From patchwork Sun Jun 30 18:05:13 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 55677
X-Patchwork-Delegate: jerinj@marvell.com
Date: Sun, 30 Jun 2019 23:35:13 +0530
Message-ID: <20190630180609.36705-2-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 01/57] net/octeontx2: add build and doc infrastructure

From: Jerin Jacob

Add the bare minimum PMD library and doc build infrastructure, and claim maintainership of the octeontx2 PMD.
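As background, a CONFIG_RTE_LIBRTE_* option enabled in config/common_base is emitted by the legacy make build system into rte_config.h with the CONFIG_ prefix dropped. The snippet below is a hedged sketch, not part of this patch, of how an application could guard octeontx2-specific code on that macro (the macro name follows the usual convention):

    #include <rte_config.h>

    #ifdef RTE_LIBRTE_OCTEONTX2_PMD
    /* Only compiled when CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y was set at build time */
    static const char *pmd_note = "octeontx2 PMD compiled in";
    #endif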
Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Kiran Kumar K
---
 MAINTAINERS                                 |  9 ++++++
 config/common_base                          |  5 +++
 doc/guides/nics/features/octeontx2.ini      |  9 ++++++
 doc/guides/nics/features/octeontx2_vec.ini  |  9 ++++++
 doc/guides/nics/features/octeontx2_vf.ini   |  9 ++++++
 doc/guides/nics/index.rst                   |  1 +
 doc/guides/nics/octeontx2.rst               | 32 +++++++++++++++++++
 doc/guides/platform/octeontx2.rst           |  3 ++
 drivers/net/Makefile                        |  1 +
 drivers/net/meson.build                     |  6 +++-
 drivers/net/octeontx2/Makefile              | 30 +++++++++++++++++
 drivers/net/octeontx2/meson.build           |  9 ++++++
 drivers/net/octeontx2/otx2_ethdev.c         |  3 ++
 .../octeontx2/rte_pmd_octeontx2_version.map |  4 +++
 mk/rte.app.mk                               |  2 ++
 15 files changed, 131 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/nics/features/octeontx2.ini
 create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
 create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
 create mode 100644 doc/guides/nics/octeontx2.rst
 create mode 100644 drivers/net/octeontx2/Makefile
 create mode 100644 drivers/net/octeontx2/meson.build
 create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
 create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 24431832a..37fb91d64 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -684,6 +684,15 @@ F: drivers/net/mvneta/
 F: doc/guides/nics/mvneta.rst
 F: doc/guides/nics/features/mvneta.ini
 
+Marvell OCTEON TX2
+M: Jerin Jacob
+M: Nithin Dabilpuram
+M: Kiran Kumar K
+T: git://dpdk.org/next/dpdk-next-net-mrvl
+F: drivers/net/octeontx2/
+F: doc/guides/nics/features/octeontx2*.ini
+F: doc/guides/nics/octeontx2.rst
+
 Mellanox mlx4
 M: Matan Azrad
 M: Shahaf Shuler
diff --git a/config/common_base b/config/common_base
index e700bf1e7..6cc44b65a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -411,6 +411,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 #
 CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
 
+#
+# Compile burst-oriented Marvell OCTEON TX2 network PMD driver
+#
+CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
+
 #
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
new file mode 100644
index 000000000..84d5ad779
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
new file mode 100644
index 000000000..5fd7e4c5c
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
new file mode 100644
index 000000000..3128cc120
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'octeontx2_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO = Y
+ARMv8 = Y
+Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index d664c4592..9fec02f3e 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -46,6 +46,7 @@ Network Interface Controller Drivers
     nfb
     nfp
     octeontx
+    octeontx2
     qede
     sfc_efx
     softnic
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
new file mode 100644
index 000000000..f0bd36be3
--- /dev/null
+++ b/doc/guides/nics/octeontx2.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2019 Marvell International Ltd.
+
+OCTEON TX2 Poll Mode driver
+===========================
+
+The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev
+driver support for the inbuilt network device found in the **Marvell OCTEON TX2**
+SoC family, as well as for their virtual functions (VF) in SR-IOV context.
+
+More information can be found at `Marvell Official Website
+`_.
+
+Features
+--------
+
+Features of the OCTEON TX2 Ethdev PMD are:
+
+
+Prerequisites
+-------------
+
+See :doc:`../platform/octeontx2` for setup information.
+
+Compile time Config Options
+---------------------------
+
+The following options may be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
+
+  Toggle compilation of the ``librte_pmd_octeontx2`` driver.
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..d2592f119 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -98,6 +98,9 @@ HW Offload Drivers
 
 This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
 
+#. **Ethdev Driver**
+   See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
+
 #. **Mempool Driver**
    See :doc:`../mempool/octeontx2` for NPA mempool driver information.
 
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index a1d45d9cb..5767fdf65 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -47,6 +47,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
 DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
 DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 86e704e13..513f19b33 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -33,7 +33,11 @@ drivers = ['af_packet',
	'netvsc',
	'nfb',
	'nfp',
-	'null', 'octeontx', 'pcap', 'qede', 'ring',
+	'null',
+	'octeontx',
+	'octeontx2',
+	'pcap',
+	'ring',
	'sfc',
	'softnic',
	'szedata2',
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
new file mode 100644
index 000000000..9c467352f
--- /dev/null
+++ b/drivers/net/octeontx2/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+
+EXPORT_MAP := rte_pmd_octeontx2_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+	otx2_ethdev.c
+
+LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
new file mode 100644
index 000000000..0d0ca32da
--- /dev/null
+++ b/drivers/net/octeontx2/meson.build
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files(
+		'otx2_ethdev.c',
+		)
+
+deps += ['common_octeontx2', 'mempool_octeontx2']
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
new file mode 100644
index 000000000..d26535dee
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
new file mode 100644
index 000000000..9a61188cd
--- /dev/null
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -0,0 +1,4 @@
+DPDK_19.08 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 2b5696a27..fab72ff6a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO
 _LDLIBS-y += -lrte_common_octeontx
 endif
 OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
 ifeq ($(findstring y,$(OCTEONTX2-y)),y)
 _LDLIBS-y += -lrte_common_octeontx2
 endif
@@ -195,6 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
From patchwork Sun Jun 30 18:05:14 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 55678
X-Patchwork-Delegate: jerinj@marvell.com
Date: Sun, 30 Jun 2019 23:35:14 +0530
Message-ID: <20190630180609.36705-3-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 02/57] net/octeontx2: add ethdev probe and remove

From: Jerin Jacob

Add basic PCIe ethdev probe and remove.
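Before the diff, a short orientation: a DPDK PCI ethdev PMD of this era hooks into the bus through an rte_pci_driver with probe/remove callbacks and registers it with RTE_PMD_REGISTER_PCI. The following is a minimal, hedged sketch of that pattern; every my_* name and the device id are placeholders, and the real nix_probe/nix_remove in this patch add secondary-process handling on top of it:

    #include <rte_common.h>
    #include <rte_bus_pci.h>
    #include <rte_ethdev_pci.h>

    /* Placeholder per-port private data; the real driver uses struct otx2_eth_dev */
    struct my_priv { uint64_t flags; };

    static int my_dev_init(struct rte_eth_dev *eth_dev)
    { RTE_SET_USED(eth_dev); return 0; }

    static int my_dev_uninit(struct rte_eth_dev *eth_dev)
    { RTE_SET_USED(eth_dev); return 0; }

    static int
    my_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
    {
            RTE_SET_USED(pci_drv);
            /* Allocates the ethdev, sizes dev_private, then calls my_dev_init() */
            return rte_eth_dev_pci_generic_probe(pci_dev,
                                                 sizeof(struct my_priv), my_dev_init);
    }

    static int
    my_remove(struct rte_pci_device *pci_dev)
    {
            /* Finds the ethdev by PCI name, calls my_dev_uninit(), releases it */
            return rte_eth_dev_pci_generic_remove(pci_dev, my_dev_uninit);
    }

    static const struct rte_pci_id my_id_map[] = {
            { RTE_PCI_DEVICE(0x177d, 0xa0f0) }, /* placeholder vendor/device ids */
            { .vendor_id = 0, },
    };

    static struct rte_pci_driver my_pmd = {
            .id_table = my_id_map,
            .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
            .probe = my_probe,
            .remove = my_remove,
    };

    RTE_PMD_REGISTER_PCI(net_my_pmd, my_pmd);
    RTE_PMD_REGISTER_PCI_TABLE(net_my_pmd, my_id_map);
    RTE_PMD_REGISTER_KMOD_DEP(net_my_pmd, "vfio-pci");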
Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/Makefile | 8 ++- drivers/net/octeontx2/meson.build | 14 ++++- drivers/net/octeontx2/otx2_ethdev.c | 93 +++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++ 4 files changed, 140 insertions(+), 2 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_ethdev.h diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 9c467352f..b3060e2dd 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -15,6 +15,11 @@ CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2 CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2 CFLAGS += -O3 +ifneq ($(CONFIG_RTE_ARCH_64),y) +CFLAGS += -Wno-int-to-pointer-cast +CFLAGS += -Wno-pointer-to-int-cast +endif + EXPORT_MAP := rte_pmd_octeontx2_version.map LIBABIVER := 1 @@ -25,6 +30,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ethdev.c -LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 +LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal +LDLIBS += -lrte_ethdev -lrte_bus_pci include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 0d0ca32da..db375f33b 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -6,4 +6,16 @@ sources = files( 'otx2_ethdev.c', ) -deps += ['common_octeontx2', 'mempool_octeontx2'] +deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2'] + +extra_flags = [] +# This integrated controller runs only on a arm64 machine, remove 32bit warnings +if not dpdk_conf.get('RTE_ARCH_64') + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] +endif + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index d26535dee..05fa8988e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1,3 +1,96 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(C) 2019 Marvell International Ltd. */ + +#include +#include +#include + +#include "otx2_ethdev.h" + +static int +otx2_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(eth_dev); + + return -ENODEV; +} + +static int +otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) +{ + RTE_SET_USED(eth_dev); + RTE_SET_USED(mbox_close); + + return -ENODEV; +} + +static int +nix_remove(struct rte_pci_device *pci_dev) +{ + struct rte_eth_dev *eth_dev; + int rc; + + eth_dev = rte_eth_dev_allocated(pci_dev->device.name); + if (eth_dev) { + /* Cleanup eth dev */ + rc = otx2_eth_dev_uninit(eth_dev, true); + if (rc) + return rc; + + rte_eth_dev_pci_release(eth_dev); + } + + /* Nothing to be done for secondary processes */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + return 0; +} + +static int +nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) +{ + int rc; + + RTE_SET_USED(pci_drv); + + rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev), + otx2_eth_dev_init); + + /* On error on secondary, recheck if port exists in primary or + * in mid of detach state. 
+ */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc) + if (!rte_eth_dev_allocated(pci_dev->device.name)) + return 0; + return rc; +} + +static const struct rte_pci_id pci_nix_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF) + }, + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF) + }, + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_RVU_AF_VF) + }, + { + .vendor_id = 0, + }, +}; + +static struct rte_pci_driver pci_nix = { + .id_table = pci_nix_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA | + RTE_PCI_DRV_INTR_LSC, + .probe = nix_probe, + .remove = nix_remove, +}; + +RTE_PMD_REGISTER_PCI(net_octeontx2, pci_nix); +RTE_PMD_REGISTER_PCI_TABLE(net_octeontx2, pci_nix_map); +RTE_PMD_REGISTER_KMOD_DEP(net_octeontx2, "vfio-pci"); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h new file mode 100644 index 000000000..fd01a3254 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_ETHDEV_H__ +#define __OTX2_ETHDEV_H__ + +#include + +#include + +#include "otx2_common.h" +#include "otx2_dev.h" +#include "otx2_irq.h" +#include "otx2_mempool.h" + +struct otx2_eth_dev { + OTX2_DEV; /* Base class */ +} __rte_cache_aligned; + +static inline struct otx2_eth_dev * +otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) +{ + return eth_dev->data->dev_private; +} + +#endif /* __OTX2_ETHDEV_H__ */ From patchwork Sun Jun 30 18:05:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55679 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id EAF0B1B9AC; Sun, 30 Jun 2019 20:06:55 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 8A7861B965 for ; Sun, 30 Jun 2019 20:06:44 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5hwI027703; Sun, 30 Jun 2019 11:06:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=prK8VEEaeLthSUhdCs/X/fcArm2VTaPOJxxYCVHthX0=; b=UW7HwwYe17YIwJjZ23tBDcCX6vAuGs5Mj/DchPsS0QEojuSysxNRMITTn3AN9rHrNNh9 9GlL+Ei+LmAePtwRfSjwW0J/9UzD2vjBjaS6BuQTl3/feouMpzeafnYWJ2ARmZM8YUK2 OAqfhAIu7HrPy79E5Ur6HV3xp8vLvidmclGVDj+jW5g3ReWiX7VQf1eUfpSnf2jbl32Z rbHFauR/Xwl23xn2okc+PfdNQpJVhG1EuSl9dJGnFnEFOPhzXlxjsmWVIPJysKgzajS9 WVSkRu6Mu2C9dP53U0nGA6VYSDx3nay5bzdrfvGURBm/+p0bZ3noW3kDXGmu0MnSMbDX PQ== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gdu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:06:43 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:42 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 
11:06:42 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2AD033F703F; Sun, 30 Jun 2019 11:06:39 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , "Anatoly Burakov" CC: Sunil Kumar Kori , Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:15 +0530 Message-ID: <20190630180609.36705-4-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 03/57] net/octeontx2: add device init and uninit X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add basic init and uninit function which includes attaching LF device to probed PCIe device. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Sunil Kumar Kori Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 277 +++++++++++++++++++++++++++- drivers/net/octeontx2/otx2_ethdev.h | 72 ++++++++ drivers/net/octeontx2/otx2_mac.c | 72 ++++++++ 5 files changed, 418 insertions(+), 5 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_mac.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index b3060e2dd..4ff3609d2 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -28,6 +28,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_mac.c \ otx2_ethdev.c LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index db375f33b..b153f166d 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files( + 'otx2_mac.c', 'otx2_ethdev.c', ) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 05fa8988e..08f03b4c3 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -8,27 +8,277 @@ #include "otx2_ethdev.h" +static inline void +otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(eth_dev); +} + +static inline void +otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(eth_dev); +} + +static inline uint64_t +nix_get_rx_offload_capa(struct otx2_eth_dev *dev) +{ + uint64_t capa = NIX_RX_OFFLOAD_CAPA; + + if (otx2_dev_is_vf(dev)) + capa &= ~DEV_RX_OFFLOAD_TIMESTAMP; + + return capa; +} + +static inline uint64_t +nix_get_tx_offload_capa(struct otx2_eth_dev *dev) +{ + RTE_SET_USED(dev); + + return NIX_TX_OFFLOAD_CAPA; +} + +static int +nix_lf_free(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_lf_free_req *req; + struct ndc_sync_op *ndc_req; + int rc; + + /* Sync NDC-NIX for LF */ + ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); + ndc_req->nix_lf_tx_sync = 1; + ndc_req->nix_lf_rx_sync = 1; + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc); + + req = otx2_mbox_alloc_msg_nix_lf_free(mbox); + /* Let AF driver free 
all this nix lf's + * NPC entries allocated using NPC MBOX. + */ + req->flags = 0; + + return otx2_mbox_process(mbox); +} + +static inline int +nix_lf_attach(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct rsrc_attach_req *req; + + /* Attach NIX(lf) */ + req = otx2_mbox_alloc_msg_attach_resources(mbox); + req->modify = true; + req->nixlf = true; + + return otx2_mbox_process(mbox); +} + +static inline int +nix_lf_get_msix_offset(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct msix_offset_rsp *msix_rsp; + int rc; + + /* Get NPA and NIX MSIX vector offsets */ + otx2_mbox_alloc_msg_msix_offset(mbox); + + rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); + + dev->nix_msixoff = msix_rsp->nix_msixoff; + + return rc; +} + +static inline int +otx2_eth_dev_lf_detach(struct otx2_mbox *mbox) +{ + struct rsrc_detach_req *req; + + req = otx2_mbox_alloc_msg_detach_resources(mbox); + + /* Detach all except npa lf */ + req->partial = true; + req->nixlf = true; + req->sso = true; + req->ssow = true; + req->timlfs = true; + req->cptlfs = true; + + return otx2_mbox_process(mbox); +} + static int otx2_eth_dev_init(struct rte_eth_dev *eth_dev) { - RTE_SET_USED(eth_dev); + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_pci_device *pci_dev; + int rc, max_entries; - return -ENODEV; + /* For secondary processes, the primary has done all the work */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + /* Setup callbacks for secondary process */ + otx2_eth_set_tx_function(eth_dev); + otx2_eth_set_rx_function(eth_dev); + return 0; + } + + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + rte_eth_copy_pci_info(eth_dev, pci_dev); + eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE; + + /* Zero out everything after OTX2_DEV to allow proper dev_reset() */ + memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) - + offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start)); + + if (!dev->mbox_active) { + /* Initialize the base otx2_dev object + * only if already present + */ + rc = otx2_dev_init(pci_dev, dev); + if (rc) { + otx2_err("Failed to initialize otx2_dev rc=%d", rc); + goto error; + } + } + + /* Grab the NPA LF if required */ + rc = otx2_npa_lf_init(pci_dev, dev); + if (rc) + goto otx2_dev_uninit; + + dev->configured = 0; + dev->drv_inited = true; + dev->base = dev->bar2 + (RVU_BLOCK_ADDR_NIX0 << 20); + dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20); + + /* Attach NIX LF */ + rc = nix_lf_attach(dev); + if (rc) + goto otx2_npa_uninit; + + /* Get NIX MSIX offset */ + rc = nix_lf_get_msix_offset(dev); + if (rc) + goto otx2_npa_uninit; + + /* Get maximum number of supported MAC entries */ + max_entries = otx2_cgx_mac_max_entries_get(dev); + if (max_entries < 0) { + otx2_err("Failed to get max entries for mac addr"); + rc = -ENOTSUP; + goto mbox_detach; + } + + /* For VFs, returned max_entries will be 0. But to keep default MAC + * address, one entry must be allocated. So setting up to 1. 
+ */ + if (max_entries == 0) + max_entries = 1; + + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries * + RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) { + otx2_err("Failed to allocate memory for mac addr"); + rc = -ENOMEM; + goto mbox_detach; + } + + dev->max_mac_entries = max_entries; + + rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr); + if (rc) + goto free_mac_addrs; + + /* Update the mac address */ + memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN); + + /* Also sync same MAC address to CGX table */ + otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]); + + dev->tx_offload_capa = nix_get_tx_offload_capa(dev); + dev->rx_offload_capa = nix_get_rx_offload_capa(dev); + + if (otx2_dev_is_A0(dev)) { + dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q; + dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL; + } + + otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64 + " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64, + eth_dev->data->port_id, dev->pf, dev->vf, + OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap, + dev->rx_offload_capa, dev->tx_offload_capa); + return 0; + +free_mac_addrs: + rte_free(eth_dev->data->mac_addrs); +mbox_detach: + otx2_eth_dev_lf_detach(dev->mbox); +otx2_npa_uninit: + otx2_npa_lf_fini(); +otx2_dev_uninit: + otx2_dev_fini(pci_dev, dev); +error: + otx2_err("Failed to init nix eth_dev rc=%d", rc); + return rc; } static int otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) { - RTE_SET_USED(eth_dev); - RTE_SET_USED(mbox_close); + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_pci_device *pci_dev; + int rc; - return -ENODEV; + /* Nothing to be done for secondary processes */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + rc = nix_lf_free(dev); + if (rc) + otx2_err("Failed to free nix lf, rc=%d", rc); + + rc = otx2_npa_lf_fini(); + if (rc) + otx2_err("Failed to cleanup npa lf, rc=%d", rc); + + rte_free(eth_dev->data->mac_addrs); + eth_dev->data->mac_addrs = NULL; + dev->drv_inited = false; + + pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + + rc = otx2_eth_dev_lf_detach(dev->mbox); + if (rc) + otx2_err("Failed to detach resources, rc=%d", rc); + + /* Check if mbox close is needed */ + if (!mbox_close) + return 0; + + if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) { + /* Will be freed later by PMD */ + eth_dev->data->dev_private = NULL; + return 0; + } + + otx2_dev_fini(pci_dev, dev); + return 0; } static int nix_remove(struct rte_pci_device *pci_dev) { struct rte_eth_dev *eth_dev; + struct otx2_idev_cfg *idev; + struct otx2_dev *otx2_dev; int rc; eth_dev = rte_eth_dev_allocated(pci_dev->device.name); @@ -45,7 +295,24 @@ nix_remove(struct rte_pci_device *pci_dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Check for common resources */ + idev = otx2_intra_dev_get_cfg(); + if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev) + return 0; + + otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf); + + if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev)) + goto exit; + + /* Safe to cleanup mbox as no more users */ + otx2_dev_fini(pci_dev, otx2_dev); + rte_free(otx2_dev); return 0; + +exit: + otx2_info("%s: common resource in use by other devices", pci_dev->name); + return -EAGAIN; } static int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index fd01a3254..d9f72686a 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -8,14 
+8,76 @@ #include #include +#include #include "otx2_common.h" #include "otx2_dev.h" #include "otx2_irq.h" #include "otx2_mempool.h" +#define OTX2_ETH_DEV_PMD_VERSION "1.0" + +/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */ + +/* Minimum CQ size should be 4K */ +#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63) +#define otx2_ethdev_fixup_is_min_4k_q(dev) \ + ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q) +/* Limit CQ being full */ +#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62) +#define otx2_ethdev_fixup_is_limit_cq_full(dev) \ + ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL) + +/* Used for struct otx2_eth_dev::flags */ +#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0) + +#define NIX_TX_OFFLOAD_CAPA ( \ + DEV_TX_OFFLOAD_MBUF_FAST_FREE | \ + DEV_TX_OFFLOAD_MT_LOCKFREE | \ + DEV_TX_OFFLOAD_VLAN_INSERT | \ + DEV_TX_OFFLOAD_QINQ_INSERT | \ + DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \ + DEV_TX_OFFLOAD_TCP_CKSUM | \ + DEV_TX_OFFLOAD_UDP_CKSUM | \ + DEV_TX_OFFLOAD_SCTP_CKSUM | \ + DEV_TX_OFFLOAD_MULTI_SEGS | \ + DEV_TX_OFFLOAD_IPV4_CKSUM) + +#define NIX_RX_OFFLOAD_CAPA ( \ + DEV_RX_OFFLOAD_CHECKSUM | \ + DEV_RX_OFFLOAD_SCTP_CKSUM | \ + DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \ + DEV_RX_OFFLOAD_SCATTER | \ + DEV_RX_OFFLOAD_JUMBO_FRAME | \ + DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \ + DEV_RX_OFFLOAD_VLAN_STRIP | \ + DEV_RX_OFFLOAD_VLAN_FILTER | \ + DEV_RX_OFFLOAD_QINQ_STRIP | \ + DEV_RX_OFFLOAD_TIMESTAMP) + struct otx2_eth_dev { OTX2_DEV; /* Base class */ + MARKER otx2_eth_dev_data_start; + uint16_t sqb_size; + uint16_t rx_chan_base; + uint16_t tx_chan_base; + uint8_t rx_chan_cnt; + uint8_t tx_chan_cnt; + uint8_t lso_tsov4_idx; + uint8_t lso_tsov6_idx; + uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t max_mac_entries; + uint8_t configured; + uint16_t nix_msixoff; + uintptr_t base; + uintptr_t lmt_addr; + uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */ + uint64_t rx_offloads; + uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */ + uint64_t tx_offloads; + uint64_t rx_offload_capa; + uint64_t tx_offload_capa; } __rte_cache_aligned; static inline struct otx2_eth_dev * @@ -24,4 +86,14 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) return eth_dev->data->dev_private; } +/* CGX */ +int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); +int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); +int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, + struct rte_ether_addr *addr); + +/* Mac address handling */ +int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr); +int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c new file mode 100644 index 000000000..89b0ca6b0 --- /dev/null +++ b/drivers/net/octeontx2/otx2_mac.c @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include + +#include "otx2_dev.h" +#include "otx2_ethdev.h" + +int +otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_mac_addr_set_or_get *req; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + if (otx2_dev_active_vfs(dev)) + return -ENOTSUP; + + req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox); + otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Failed to set mac address in CGX, rc=%d", rc); + + return 0; +} + +int +otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev) +{ + struct cgx_max_dmac_entries_get_rsp *rsp; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox); + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + return rsp->max_dmac_filters; +} + +int +otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_get_mac_addr_rsp *rsp; + int rc; + + otx2_mbox_alloc_msg_nix_get_mac_addr(mbox); + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc) { + otx2_err("Failed to get mac address, rc=%d", rc); + goto done; + } + + otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN); + +done: + return rc; +} From patchwork Sun Jun 30 18:05:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55680 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6FE551B9B2; Sun, 30 Jun 2019 20:06:58 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id A4D871B965 for ; Sun, 30 Jun 2019 20:06:48 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI50uC015649; Sun, 30 Jun 2019 11:06:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=mQJlPAX4YGCuWH6wLTEco+MxDQqDE9wQBLSoWZhpfEw=; b=N2Gfuf6DWU8XLhceI98WUslFwd91eHF3QhHmCtQsx5It2ItuZ/FgXT6cQr9sGJw0UHM3 f+vOdP0br63r3sUdqqwP3Ey9kWKZtsGzUHFH9P9MG+WUNS14AHULmG945W6/naK2CC2G 1i/lNmiDTmSfYOmQnSCE79GBKiOmyFREgkpQONN8AH1Ct60rYrIxHMWB0Iy+cpbBFtep jlb6eQNccFdUN7gH1+QZe7pmQtBlxAkkXB3xcH9tUKDxvvYjxykVkxtHq9nX+yxjc3Su ReGt+/HByuucWA+EzcVMkFfSeTyL3G/KgIeW1s0AZ4PoRuaT4JoThmSMabPw6Tq0MVVI bA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y7g-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:06:48 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:46 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:06:46 -0700 Received: from 
jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix); Sun, 30 Jun 2019 11:06:43 -0700 (PDT)
Date: Sun, 30 Jun 2019 23:35:16 +0530
Message-ID: <20190630180609.36705-5-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 04/57] net/octeontx2: add devargs parsing functions

From: Jerin Jacob

Add the various devargs command line options supported by this driver.
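Since the devargs keys below are consumed through the rte_kvargs API, here is a hedged, self-contained sketch of the parsing pattern; the key name reta_size matches the patch, while parse_my_devargs() and its handler are illustrative placeholders:

    #include <errno.h>
    #include <stdlib.h>

    #include <rte_common.h>
    #include <rte_kvargs.h>

    /* Handler invoked for each "reta_size=<n>" occurrence in the devargs string */
    static int
    handle_reta_size(const char *key, const char *value, void *opaque)
    {
            RTE_SET_USED(key);
            *(uint16_t *)opaque = (uint16_t)atoi(value);
            return 0;
    }

    /* Parse a devargs string such as the one passed via "-w 0002:02:00.0,reta_size=256" */
    static int
    parse_my_devargs(const char *args, uint16_t *reta_size)
    {
            static const char * const keys[] = { "reta_size", NULL };
            struct rte_kvargs *kvlist;

            kvlist = rte_kvargs_parse(args, keys);
            if (kvlist == NULL)
                    return -EINVAL;

            rte_kvargs_process(kvlist, "reta_size", handle_reta_size, reta_size);
            rte_kvargs_free(kvlist);

            return 0;
    }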
Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Kiran Kumar K
Signed-off-by: Vamsi Attunuru
---
 doc/guides/nics/octeontx2.rst               |  67 ++++++++
 drivers/net/octeontx2/Makefile              |   5 +-
 drivers/net/octeontx2/meson.build           |   1 +
 drivers/net/octeontx2/otx2_ethdev.c         |   7 +
 drivers/net/octeontx2/otx2_ethdev.h         |  23 +++
 drivers/net/octeontx2/otx2_ethdev_devargs.c | 165 ++++++++++++++++++++
 drivers/net/octeontx2/otx2_rx.h             |  10 ++
 7 files changed, 276 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
 create mode 100644 drivers/net/octeontx2/otx2_rx.h
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index f0bd36be3..92a7ebc42 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -30,3 +30,70 @@ The following options may be modified in the ``config`` file.
 - ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``)
 
   Toggle compilation of the ``librte_pmd_octeontx2`` driver.
+
+Runtime Config Options
+----------------------
+
+- ``HW offload ptype parsing disable`` (default ``0``)
+
+  Packet type parsing is HW offloaded by default; it may be toggled using the
+  ``ptype_disable`` ``devargs`` parameter.
+
+- ``Rx&Tx scalar mode enable`` (default ``0``)
+
+  The ethdev supports both scalar and vector modes; the mode may be selected
+  at runtime using the ``scalar_enable`` ``devargs`` parameter.
+
+- ``RSS reta size`` (default ``64``)
+
+  The RSS redirection table size may be configured at runtime using the
+  ``reta_size`` ``devargs`` parameter.
+
+  For example::
+
+    -w 0002:02:00.0,reta_size=256
+
+  With the above configuration, a RETA table of size 256 is populated.
+
+- ``Flow priority levels`` (default ``3``)
+
+  RTE Flow priority levels can be configured at runtime using the
+  ``flow_max_priority`` ``devargs`` parameter.
+
+  For example::
+
+    -w 0002:02:00.0,flow_max_priority=10
+
+  With the above configuration, the priority level is set to 10 (0-9). The
+  maximum priority level supported is 32.
+
+- ``Reserve Flow entries`` (default ``8``)
+
+  RTE flow entries can be preallocated, and the preallocation size can be
+  selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.
+
+  For example::
+
+    -w 0002:02:00.0,flow_prealloc_size=4
+
+  With the above configuration, the prealloc size is set to 4. The maximum
+  prealloc size supported is 32.
+
+- ``Max SQB buffer count`` (default ``512``)
+
+  The send queue descriptor buffer count may be limited at runtime using the
+  ``max_sqb_count`` ``devargs`` parameter.
+
+  For example::
+
+    -w 0002:02:00.0,max_sqb_count=64
+
+  With the above configuration, each send queue's descriptor buffer count is
+  limited to a maximum of 64 buffers.
+
+
+.. note::
+
+   The above devargs parameters are configurable per device. To configure all
+   the ethdev ports, the user needs to pass the parameters to every PCIe
+   device used by the application.
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 4ff3609d2..d1c8871d8 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -29,9 +29,10 @@ LIBABIVER := 1
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
	otx2_mac.c \
-	otx2_ethdev.c
+	otx2_ethdev.c \
+	otx2_ethdev_devargs.c
 
 LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
-LDLIBS += -lrte_ethdev -lrte_bus_pci
+LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b153f166d..b5c6fb978 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
 sources = files(
		'otx2_mac.c',
		'otx2_ethdev.c',
+		'otx2_ethdev_devargs.c'
		)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 08f03b4c3..eeba0c2c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -137,6 +137,13 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
	memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
		offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
 
+	/* Parse devargs string */
+	rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+	if (rc) {
+		otx2_err("Failed to parse devargs rc=%d", rc);
+		goto error;
+	}
+
	if (!dev->mbox_active) {
		/* Initialize the base otx2_dev object
		 * only if already present
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9f72686a..a83688392 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -9,11 +9,13 @@
 
 #include
 #include
+#include
 
 #include "otx2_common.h"
 #include "otx2_dev.h"
 #include "otx2_irq.h"
 #include "otx2_mempool.h"
+#include "otx2_rx.h"
 
 #define OTX2_ETH_DEV_PMD_VERSION "1.0"
 
@@ -31,6 +33,10 @@
 /* Used for struct otx2_eth_dev::flags */
 #define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
 
+#define NIX_MAX_SQB 512
+#define NIX_MIN_SQB 32
+#define NIX_RSS_RETA_SIZE 64
+
 #define NIX_TX_OFFLOAD_CAPA ( \
	DEV_TX_OFFLOAD_MBUF_FAST_FREE | \
	DEV_TX_OFFLOAD_MT_LOCKFREE | \
@@ -56,6 +62,15 @@
	DEV_RX_OFFLOAD_QINQ_STRIP | \
	DEV_RX_OFFLOAD_TIMESTAMP)
 
+struct otx2_rss_info {
+	uint16_t rss_size;
+};
+
+struct otx2_npc_flow_info {
+	uint16_t flow_prealloc_size;
+	uint16_t flow_max_priority;
+};
+
 struct otx2_eth_dev {
	OTX2_DEV; /* Base class */
	MARKER otx2_eth_dev_data_start;
@@ -72,12 +87,16 @@ struct otx2_eth_dev {
	uint16_t nix_msixoff;
	uintptr_t base;
	uintptr_t lmt_addr;
+	uint16_t scalar_ena;
+	uint16_t max_sqb_count;
	uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
	uint64_t rx_offloads;
	uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
	uint64_t tx_offloads;
	uint64_t rx_offload_capa;
	uint64_t tx_offload_capa;
+	struct otx2_rss_info rss_info;
+	struct otx2_npc_flow_info npc_flow;
 } __rte_cache_aligned;
 
 static inline
struct otx2_eth_dev * @@ -96,4 +115,8 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr); int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); +/* Devargs */ +int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, + struct otx2_eth_dev *dev); + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c new file mode 100644 index 000000000..85e7e312a --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c @@ -0,0 +1,165 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "otx2_ethdev.h" + +static int +parse_flow_max_priority(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint16_t val; + + val = atoi(value); + + /* Limit the max priority to 32 */ + if (val < 1 || val > 32) + return -EINVAL; + + *(uint16_t *)extra_args = val; + + return 0; +} + +static int +parse_flow_prealloc_size(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint16_t val; + + val = atoi(value); + + /* Limit the prealloc size to 32 */ + if (val < 1 || val > 32) + return -EINVAL; + + *(uint16_t *)extra_args = val; + + return 0; +} + +static int +parse_reta_size(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint32_t val; + + val = atoi(value); + + if (val <= ETH_RSS_RETA_SIZE_64) + val = ETH_RSS_RETA_SIZE_64; + else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128) + val = ETH_RSS_RETA_SIZE_128; + else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256) + val = ETH_RSS_RETA_SIZE_256; + else + val = NIX_RSS_RETA_SIZE; + + *(uint16_t *)extra_args = val; + + return 0; +} + +static int +parse_ptype_flag(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint32_t val; + + val = atoi(value); + if (val) + val = 0; /* Disable NIX_RX_OFFLOAD_PTYPE_F */ + + *(uint16_t *)extra_args = val; + + return 0; +} + +static int +parse_flag(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + + *(uint16_t *)extra_args = atoi(value); + + return 0; +} + +static int +parse_sqb_count(const char *key, const char *value, void *extra_args) +{ + RTE_SET_USED(key); + uint32_t val; + + val = atoi(value); + + if (val < NIX_MIN_SQB || val > NIX_MAX_SQB) + return -EINVAL; + + *(uint16_t *)extra_args = val; + + return 0; +} + +#define OTX2_RSS_RETA_SIZE "reta_size" +#define OTX2_PTYPE_DISABLE "ptype_disable" +#define OTX2_SCL_ENABLE "scalar_enable" +#define OTX2_MAX_SQB_COUNT "max_sqb_count" +#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size" +#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority" + +int +otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev) +{ + uint16_t offload_flag = NIX_RX_OFFLOAD_PTYPE_F; + uint16_t rss_size = NIX_RSS_RETA_SIZE; + uint16_t sqb_count = NIX_MAX_SQB; + uint16_t flow_prealloc_size = 8; + uint16_t flow_max_priority = 3; + uint16_t scalar_enable = 0; + struct rte_kvargs *kvlist; + + if (devargs == NULL) + goto null_devargs; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) + goto exit; + + rte_kvargs_process(kvlist, OTX2_PTYPE_DISABLE, + &parse_ptype_flag, &offload_flag); + rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE, + &parse_reta_size, &rss_size); + rte_kvargs_process(kvlist, OTX2_SCL_ENABLE, + &parse_flag, &scalar_enable); + 
rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT, + &parse_sqb_count, &sqb_count); + rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE, + &parse_flow_prealloc_size, &flow_prealloc_size); + rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY, + &parse_flow_max_priority, &flow_max_priority); + rte_kvargs_free(kvlist); + +null_devargs: + dev->rx_offload_flags = offload_flag; + dev->scalar_ena = scalar_enable; + dev->max_sqb_count = sqb_count; + dev->rss_info.rss_size = rss_size; + dev->npc_flow.flow_prealloc_size = flow_prealloc_size; + dev->npc_flow.flow_max_priority = flow_max_priority; + return 0; + +exit: + return -EINVAL; +} + +RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2, + OTX2_RSS_RETA_SIZE "=<64|128|256>" + OTX2_PTYPE_DISABLE "=1" + OTX2_SCL_ENABLE "=1" + OTX2_MAX_SQB_COUNT "=<32-512>" + OTX2_FLOW_PREALLOC_SIZE "=<1-32>" + OTX2_FLOW_MAX_PRIORITY "=<1-32>"); diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h new file mode 100644 index 000000000..1749c43ff --- /dev/null +++ b/drivers/net/octeontx2/otx2_rx.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_RX_H__ +#define __OTX2_RX_H__ + +#define NIX_RX_OFFLOAD_PTYPE_F BIT(1) + +#endif /* __OTX2_RX_H__ */ From patchwork Sun Jun 30 18:05:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55681 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B31361B9BF; Sun, 30 Jun 2019 20:07:01 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 1D1F91B965 for ; Sun, 30 Jun 2019 20:06:51 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6oeq016863 for ; Sun, 30 Jun 2019 11:06:50 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=zwqI67i+mJn+cCwCeOeVxQSEN1IEYi4BZ6v/HEDYscs=; b=zROtV40x8NY9PbD6UOPX+N4ntZAXwPWSDUoEnu5VOOVLUQZSOV7fhcs6f9EYB/WBIanu 8t45cRVpkKPkjZ7Kdz8qeKdTtadV0UP8P/T4KDCVQZQ4G6BkzqIStH03u7sadgauZ874 vP5kVm6srDGp0HiSE0AN/onCkzBSibJQV12KMv80E3PMR6s/VEXbLdhxtXQkJvi27Y5q 6JJD4BnPz04bpPt5M4MESEtW63GUKC98UNAXKFb9u0rSr+k2XubgkpGyHSFQNL+X09fr BPP+I4Zbffha/P+f0fcJ7MGaM0Ftlsp2vCghk5sl2JAj0fa/Ude8UOEex9E8fyym57MN 5w== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y7k-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:06:50 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:48 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:06:48 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 29D003F703F; Sun, 30 Jun 2019 11:06:46 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Harman Kalra Date: 
Sun, 30 Jun 2019 23:35:17 +0530
Message-ID: <20190630180609.36705-6-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 05/57] net/octeontx2: handle device error interrupts

From: Jerin Jacob

Handle device-specific error and RAS interrupts.

Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Harman Kalra
---
 drivers/net/octeontx2/Makefile          |   1 +
 drivers/net/octeontx2/meson.build       |   1 +
 drivers/net/octeontx2/otx2_ethdev.c     |  12 +-
 drivers/net/octeontx2/otx2_ethdev.h     |   4 +
 drivers/net/octeontx2/otx2_ethdev_irq.c | 140 ++++++++++++++++++++++++
 5 files changed, 156 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index d1c8871d8..54f8f268d 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
	otx2_mac.c \
	otx2_ethdev.c \
+	otx2_ethdev_irq.c \
	otx2_ethdev_devargs.c
 
 LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index b5c6fb978..148f7d339 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
 sources = files(
		'otx2_mac.c',
		'otx2_ethdev.c',
+		'otx2_ethdev_irq.c',
		'otx2_ethdev_devargs.c'
		)
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index eeba0c2c6..67a7ebb36 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -175,12 +175,17 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
	if (rc)
		goto otx2_npa_uninit;
 
+	/* Register LF irq handlers */
+	rc = otx2_nix_register_irqs(eth_dev);
+	if (rc)
+		goto mbox_detach;
+
	/* Get maximum number of supported MAC entries */
	max_entries = otx2_cgx_mac_max_entries_get(dev);
	if (max_entries < 0) {
		otx2_err("Failed to get max entries for mac addr");
		rc = -ENOTSUP;
-		goto mbox_detach;
+		goto unregister_irq;
	}
 
	/* For VFs, returned max_entries will be 0.
But to keep default MAC @@ -194,7 +199,7 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) if (eth_dev->data->mac_addrs == NULL) { otx2_err("Failed to allocate memory for mac addr"); rc = -ENOMEM; - goto mbox_detach; + goto unregister_irq; } dev->max_mac_entries = max_entries; @@ -226,6 +231,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) free_mac_addrs: rte_free(eth_dev->data->mac_addrs); +unregister_irq: + otx2_nix_unregister_irqs(eth_dev); mbox_detach: otx2_eth_dev_lf_detach(dev->mbox); otx2_npa_uninit: @@ -261,6 +268,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) dev->drv_inited = false; pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + otx2_nix_unregister_irqs(eth_dev); rc = otx2_eth_dev_lf_detach(dev->mbox); if (rc) diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index a83688392..f7d8838df 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -105,6 +105,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) return eth_dev->data->dev_private; } +/* IRQ */ +int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); +void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c new file mode 100644 index 000000000..33fed93c4 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -0,0 +1,140 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include + +#include "otx2_ethdev.h" + +static void +nix_lf_err_irq(void *param) +{ + struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t intr; + + intr = otx2_read64(dev->base + NIX_LF_ERR_INT); + if (intr == 0) + return; + + otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf); + + /* Clear interrupt */ + otx2_write64(intr, dev->base + NIX_LF_ERR_INT); +} + +static int +nix_lf_register_err_irq(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, vec; + + vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C); + /* Set used interrupt vectors */ + rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec); + /* Enable all dev interrupt except for RQ_DISABLED */ + otx2_write64(~BIT_ULL(11), dev->base + NIX_LF_ERR_INT_ENA_W1S); + + return rc; +} + +static void +nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec; + + vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C); + otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec); +} + +static void +nix_lf_ras_irq(void *param) +{ + struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t intr; + + intr = otx2_read64(dev->base + NIX_LF_RAS); + if (intr == 0) + return; + + otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf); + + /* Clear 
interrupt */ + otx2_write64(intr, dev->base + NIX_LF_RAS); +} + +static int +nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, vec; + + vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C); + /* Set used interrupt vectors */ + rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec); + /* Enable dev interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S); + + return rc; +} + +static void +nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec; + + vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON; + + /* Clear err interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C); + otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec); +} + +int +otx2_nix_register_irqs(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + if (dev->nix_msixoff == MSIX_VECTOR_INVALID) { + otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x", + dev->nix_msixoff); + return -EINVAL; + } + + /* Register lf err interrupt */ + rc = nix_lf_register_err_irq(eth_dev); + /* Register RAS interrupt */ + rc |= nix_lf_register_ras_irq(eth_dev); + + return rc; +} + +void +otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev) +{ + nix_lf_unregister_err_irq(eth_dev); + nix_lf_unregister_ras_irq(eth_dev); +} From patchwork Sun Jun 30 18:05:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55682 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2A9641B9AD; Sun, 30 Jun 2019 20:07:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 82E311B9A9 for ; Sun, 30 Jun 2019 20:06:54 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI54D2015680; Sun, 30 Jun 2019 11:06:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Td8yZOrJ9+6QLUrWiTx65MoKtqwNWQehvCZGmdGJ20w=; b=LC7G94rAUgjehhUr3XAVLywzw/KhggGF+d8D6O3AB4lkdbZv1TQJuoIrVi208Lpouigo neLzzCZQwKAMctC0nKzY+WejFbi9WnqtLmk3QV4dSmUOAcQbQk1PlwGD5c7bbqpNMCAR QNe9y+pEVpHO6Cpn1oYTLXrGiW3V/r3nAwKRktaOLcv99gqvT/JgSaAE/GyUETx12863 bxdykW7MJ4sQUkVZhKnCLK839OiWJ0X6ktWA32dgvPQZ11xXvLP3jeBBlKK9p6YR0FiF HhqyHh4kZFbG8cchNWlGgB2oyfBdPwOmQ/vlCsJG1MVGkYhtEXFwezIhhMP+h/VFeYMc kA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y7r-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:06:53 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:52 -0700 
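[The error and RAS paths in patch 05/57 above share one wiring sequence: mask every cause through the *_ENA_W1C register, hook the LF-relative MSI-X vector, then unmask the wanted causes through *_ENA_W1S; the handler acknowledges by writing the read cause bits back (write-one-to-clear). A condensed sketch of that sequence follows. It reuses only helpers the patch itself calls, assumes otx2_register_irq() takes an rte_intr_callback_fn as its callback, and the ena_w1c/ena_w1s/vec_off/ena_mask parameters plus the nix_lf_register_one_irq name are illustrative stand-ins for the NIX_LF_ERR_INT_*/NIX_LF_RAS_* offsets and NIX_LF_INT_VEC_* values, not names from the patch.]

#include "otx2_ethdev.h"

/* Illustrative sketch only: generalizes nix_lf_register_err_irq() and
 * nix_lf_register_ras_irq() above. Parameters marked "stand-in" do not
 * exist in the patch; everything else is used exactly as the patch does.
 */
static int
nix_lf_register_one_irq(struct rte_eth_dev *eth_dev, rte_intr_callback_fn cb,
			uint32_t ena_w1c, uint32_t ena_w1s, /* stand-ins */
			int vec_off, uint64_t ena_mask)     /* stand-ins */
{
	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
	struct rte_intr_handle *handle = &pci_dev->intr_handle;
	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
	int rc, vec = dev->nix_msixoff + vec_off;

	/* 1. Mask all causes before hooking the vector */
	otx2_write64(~0ull, dev->base + ena_w1c);
	/* 2. Register the handler on the LF-relative MSI-X vector */
	rc = otx2_register_irq(handle, cb, eth_dev, vec);
	/* 3. Unmask only the causes of interest */
	otx2_write64(ena_mask, dev->base + ena_w1s);

	return rc;
}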
Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:06:52 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C45D73F703F; Sun, 30 Jun 2019 11:06:49 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru , Harman Kalra Date: Sun, 30 Jun 2019 23:35:18 +0530 Message-ID: <20190630180609.36705-7-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 06/57] net/octeontx2: add info get operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add device information get operation. Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Harman Kalra --- doc/guides/nics/features/octeontx2.ini | 4 ++ doc/guides/nics/features/octeontx2_vec.ini | 4 ++ doc/guides/nics/features/octeontx2_vf.ini | 3 + doc/guides/nics/octeontx2.rst | 2 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 7 +++ drivers/net/octeontx2/otx2_ethdev.h | 45 +++++++++++++++ drivers/net/octeontx2/otx2_ethdev_ops.c | 64 ++++++++++++++++++++++ 9 files changed, 131 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 84d5ad779..356b88de7 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -4,6 +4,10 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Speed capabilities = Y +Lock-free Tx queue = Y +SR-IOV = Y +Multiprocess aware = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 5fd7e4c5c..5f4eaa3f4 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -4,6 +4,10 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Speed capabilities = Y +Lock-free Tx queue = Y +SR-IOV = Y +Multiprocess aware = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 3128cc120..024b032d4 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -4,6 +4,9 @@ ; Refer to default.ini for the full list of available PMD features. 
; [Features] +Speed capabilities = Y +Lock-free Tx queue = Y +Multiprocess aware = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 92a7ebc42..e3f4c2c43 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -16,6 +16,8 @@ Features Features of the OCTEON TX2 Ethdev PMD are: +- SR-IOV VF +- Lock-free Tx queue Prerequisites ------------- diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 54f8f268d..5083637e4 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -31,6 +31,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ + otx2_ethdev_ops.c \ otx2_ethdev_devargs.c LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 148f7d339..aa8417e3f 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -6,6 +6,7 @@ sources = files( 'otx2_mac.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', + 'otx2_ethdev_ops.c', 'otx2_ethdev_devargs.c' ) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 67a7ebb36..6e3c70559 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -64,6 +64,11 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +/* Initialize and register driver with DPDK Application */ +static const struct eth_dev_ops otx2_eth_dev_ops = { + .dev_infos_get = otx2_nix_info_get, +}; + static inline int nix_lf_attach(struct otx2_eth_dev *dev) { @@ -120,6 +125,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) struct rte_pci_device *pci_dev; int rc, max_entries; + eth_dev->dev_ops = &otx2_eth_dev_ops; + /* For secondary processes, the primary has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) { /* Setup callbacks for secondary process */ diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index f7d8838df..666ceba91 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -33,9 +33,50 @@ /* Used for struct otx2_eth_dev::flags */ #define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0) +/* VLAN tag inserted by NIX_TX_VTAG_ACTION. + * In Tx space is always reserved for this in FRS. 
+ */ +#define NIX_MAX_VTAG_INS 2 +#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS) + +/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */ +#define NIX_L2_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8) + +/* HW config of frame size doesn't include FCS */ +#define NIX_MAX_HW_FRS 9212 +#define NIX_MIN_HW_FRS 60 + +/* Since HW FRS includes NPC VTAG insertion space, user has reduced FRS */ +#define NIX_MAX_FRS \ + (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE) + +#define NIX_MIN_FRS \ + (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN) + +#define NIX_MAX_MTU \ + (NIX_MAX_FRS - NIX_L2_OVERHEAD) + #define NIX_MAX_SQB 512 #define NIX_MIN_SQB 32 +#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */ #define NIX_RSS_RETA_SIZE 64 +#define NIX_RX_MIN_DESC 16 +#define NIX_RX_MIN_DESC_ALIGN 16 +#define NIX_RX_NB_SEG_MAX 6 + +/* If PTP is enabled additional SEND MEM DESC is required which + * takes 2 words, hence max 7 iova address are possible + */ +#if defined(RTE_LIBRTE_IEEE1588) +#define NIX_TX_NB_SEG_MAX 7 +#else +#define NIX_TX_NB_SEG_MAX 9 +#endif + +#define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\ + ETH_RSS_TCP | ETH_RSS_SCTP | \ + ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD) #define NIX_TX_OFFLOAD_CAPA ( \ DEV_TX_OFFLOAD_MBUF_FAST_FREE | \ @@ -105,6 +146,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) return eth_dev->data->dev_private; } +/* Ops */ +void otx2_nix_info_get(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_info *dev_info); + /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c new file mode 100644 index 000000000..df7e909d2 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" + +void +otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + devinfo->min_rx_bufsize = NIX_MIN_FRS; + devinfo->max_rx_pktlen = NIX_MAX_FRS; + devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT; + devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT; + devinfo->max_mac_addrs = dev->max_mac_entries; + devinfo->max_vfs = pci_dev->max_vfs; + devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD; + devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD; + + devinfo->rx_offload_capa = dev->rx_offload_capa; + devinfo->tx_offload_capa = dev->tx_offload_capa; + devinfo->rx_queue_offload_capa = 0; + devinfo->tx_queue_offload_capa = 0; + + devinfo->reta_size = dev->rss_info.rss_size; + devinfo->hash_key_size = NIX_HASH_KEY_SIZE; + devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD; + + devinfo->default_rxconf = (struct rte_eth_rxconf) { + .rx_drop_en = 0, + .offloads = 0, + }; + + devinfo->default_txconf = (struct rte_eth_txconf) { + .offloads = 0, + }; + + devinfo->rx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = UINT16_MAX, + .nb_min = NIX_RX_MIN_DESC, + .nb_align = NIX_RX_MIN_DESC_ALIGN, + .nb_seg_max = NIX_RX_NB_SEG_MAX, + .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX, + }; + devinfo->rx_desc_lim.nb_max = + RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max, + NIX_RX_MIN_DESC_ALIGN); + + devinfo->tx_desc_lim = (struct rte_eth_desc_lim) { + .nb_max = UINT16_MAX, + .nb_min = 1, + .nb_align = 1, + .nb_seg_max = NIX_TX_NB_SEG_MAX, + .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX, + }; + + /* Auto negotiation disabled */ + devinfo->speed_capa = ETH_LINK_SPEED_FIXED; + devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G | + ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G | + ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G; +} From patchwork Sun Jun 30 18:05:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55683 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9AB191B9C7; Sun, 30 Jun 2019 20:07:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id D46961B9AF for ; Sun, 30 Jun 2019 20:06:56 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6dmM028335 for ; Sun, 30 Jun 2019 11:06:56 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=fC0ggjr4Qm4/u3sI7oggJg9YFvqTwX8PoYlPOT49MSU=; b=G/9MfPqC7Iw7BcpkVdftCWCKzyJOCqUNaDLV62B7aqIrSQv1GWRJM7EXDimWkcq6HBhU 46zOF7P0QQekbTdgwsgQKvMnYjjpCZz824bDpTjF8Ano2k0qp+k6NpIqoaOLeZ+p6doI 3S+RSAoK0Tmf1IUV0Sw8B60u4LWHg7VEIT9BjlksgtuuPQmw2ZbtBpXVfQsyPisceh9d MQSD0ofR0q7KZl7g8/NYPJFZUDH/y5bU6lCdZCKMTHRxNqx2hAkwIkJ18QJVO5ewm67N Es/HXCpqm0WORjXwwvvZeTceOy2xseWFG7xSWgYC/+v4jKYs8JuANpKVpjNZs8U52llj FQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4geb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 
2019 11:06:56 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:54 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:06:54 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 35C3A3F7040; Sun, 30 Jun 2019 11:06:52 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:19 +0530 Message-ID: <20190630180609.36705-8-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 07/57] net/octeontx2: add device configure operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add device configure operation. This would call lf_alloc mailbox to allocate a NIX LF and upon return, AF will return the attributes for the select LF. Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 151 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 11 ++ 2 files changed, 162 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 6e3c70559..65d72a47f 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -39,6 +39,52 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) return NIX_TX_OFFLOAD_CAPA; } +static int +nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_lf_alloc_req *req; + struct nix_lf_alloc_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox); + req->rq_cnt = nb_rxq; + req->sq_cnt = nb_txq; + req->cq_cnt = nb_rxq; + /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */ + RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128); + req->xqe_sz = NIX_XQESZ_W16; + req->rss_sz = dev->rss_info.rss_size; + req->rss_grps = NIX_RSS_GRPS; + req->npa_func = otx2_npa_pf_func_get(); + req->sso_func = otx2_sso_pf_func_get(); + req->rx_cfg = BIT_ULL(35 /* DIS_APAD */); + if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM)) { + req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */); + req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */); + } + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + dev->sqb_size = rsp->sqb_size; + dev->tx_chan_base = rsp->tx_chan_base; + dev->rx_chan_base = rsp->rx_chan_base; + dev->rx_chan_cnt = rsp->rx_chan_cnt; + dev->tx_chan_cnt = rsp->tx_chan_cnt; + dev->lso_tsov4_idx = rsp->lso_tsov4_idx; + dev->lso_tsov6_idx = rsp->lso_tsov6_idx; + dev->lf_tx_stats = rsp->lf_tx_stats; + dev->lf_rx_stats = rsp->lf_rx_stats; + dev->cints = rsp->cints; + dev->qints = rsp->qints; + dev->npc_flow.channel = dev->rx_chan_base; + + return 0; +} + static int nix_lf_free(struct otx2_eth_dev *dev) { @@ -64,9 +110,114 @@ nix_lf_free(struct otx2_eth_dev 
*dev) return otx2_mbox_process(mbox); } +static int +otx2_nix_configure(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_eth_conf *conf = &data->dev_conf; + struct rte_eth_rxmode *rxmode = &conf->rxmode; + struct rte_eth_txmode *txmode = &conf->txmode; + char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE]; + struct rte_ether_addr *ea; + uint8_t nb_rxq, nb_txq; + int rc; + + rc = -EINVAL; + + /* Sanity checks */ + if (rte_eal_has_hugepages() == 0) { + otx2_err("Huge page is not configured"); + goto fail; + } + + if (rte_eal_iova_mode() != RTE_IOVA_VA) { + otx2_err("iova mode should be va"); + goto fail; + } + + if (conf->link_speeds & ETH_LINK_SPEED_FIXED) { + otx2_err("Setting link speed/duplex not supported"); + goto fail; + } + + if (conf->dcb_capability_en == 1) { + otx2_err("dcb enable is not supported"); + goto fail; + } + + if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) { + otx2_err("Flow director is not supported"); + goto fail; + } + + if (rxmode->mq_mode != ETH_MQ_RX_NONE && + rxmode->mq_mode != ETH_MQ_RX_RSS) { + otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode); + goto fail; + } + + if (txmode->mq_mode != ETH_MQ_TX_NONE) { + otx2_err("Unsupported mq tx mode %d", txmode->mq_mode); + goto fail; + } + + /* Free the resources allocated from the previous configure */ + if (dev->configured == 1) + nix_lf_free(dev); + + if (otx2_dev_is_A0(dev) && + (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) && + ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) || + (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) { + otx2_err("Outer IP and SCTP checksum unsupported"); + rc = -EINVAL; + goto fail; + } + + dev->rx_offloads = rxmode->offloads; + dev->tx_offloads = txmode->offloads; + dev->rss_info.rss_grps = NIX_RSS_GRPS; + + nb_rxq = RTE_MAX(data->nb_rx_queues, 1); + nb_txq = RTE_MAX(data->nb_tx_queues, 1); + + /* Alloc a nix lf */ + rc = nix_lf_alloc(dev, nb_rxq, nb_txq); + if (rc) { + otx2_err("Failed to init nix_lf rc=%d", rc); + goto fail; + } + + /* Update the mac address */ + ea = eth_dev->data->mac_addrs; + memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); + if (rte_is_zero_ether_addr(ea)) + rte_eth_random_addr((uint8_t *)ea); + + rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea); + + otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d" + " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 "" + " rx_flags=0x%x tx_flags=0x%x", + eth_dev->data->port_id, ea_fmt, nb_rxq, + nb_txq, dev->rx_offloads, dev->tx_offloads, + dev->rx_offload_flags, dev->tx_offload_flags); + + /* All good */ + dev->configured = 1; + dev->configured_nb_rx_qs = data->nb_rx_queues; + dev->configured_nb_tx_qs = data->nb_tx_queues; + return 0; + +fail: + return rc; +} + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, + .dev_configure = otx2_nix_configure, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 666ceba91..c1528e2ac 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -59,11 +59,14 @@ #define NIX_MAX_SQB 512 #define NIX_MIN_SQB 32 +/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/ +#define NIX_RSS_GRPS 8 #define NIX_HASH_KEY_SIZE 48 /* 352 Bits */ #define NIX_RSS_RETA_SIZE 64 #define NIX_RX_MIN_DESC 16 #define NIX_RX_MIN_DESC_ALIGN 16 #define NIX_RX_NB_SEG_MAX 6 +#define 
NIX_CQ_ENTRY_SZ 128 /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -105,9 +108,11 @@ struct otx2_rss_info { uint16_t rss_size; + uint8_t rss_grps; }; struct otx2_npc_flow_info { + uint16_t channel; /*rx channel */ uint16_t flow_prealloc_size; uint16_t flow_max_priority; }; @@ -124,7 +129,13 @@ struct otx2_eth_dev { uint8_t lso_tsov6_idx; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t max_mac_entries; + uint8_t lf_tx_stats; + uint8_t lf_rx_stats; + uint16_t cints; + uint16_t qints; uint8_t configured; + uint8_t configured_nb_rx_qs; + uint8_t configured_nb_tx_qs; uint16_t nix_msixoff; uintptr_t base; uintptr_t lmt_addr; From patchwork Sun Jun 30 18:05:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55684 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D89A81B9CC; Sun, 30 Jun 2019 20:07:28 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id BB2F61B9AB for ; Sun, 30 Jun 2019 20:06:59 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5DtH027158; Sun, 30 Jun 2019 11:06:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=5mSMfWErHElWYnV3VxsM9wSUnfg8MQAMvhHaZDlBw2I=; b=bMKv8a5fGwO/lIcj9n0OCNmFJhCt//jYEB/U50UWDoMMzG31rTeiaE79zT4rcb3yJ5P+ mHhc+HBND0PSR+h5Z6EdwDspp6pfvb70R8HZ3lQ9k9+DDEcDPC/wU4yWhAm5sWDF/SfH aeTlsBTAbnsFJgXqlPEBJ+galDmPgADj1BRc5AHS01lZQ7ZLN8YBp6z/XMpdVF/vl8y3 ZJI7jJYXIfxHZ+W2dG7sk1ZO7BeX9EKWhDJTNpin/Hew9AucVCrFIdlbHaSEXES9dPNC lVLISM6CH2ZTnw/9bkORtxxsc9k3CF9mD3BTQe6M4Lmn/rlotOVw/UlY6OsNfrSgj27T fA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gej-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:06:58 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:06:57 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:06:57 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id DBCD13F703F; Sun, 30 Jun 2019 11:06:55 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , "John McNamara" , Marko Kovacevic Date: Sun, 30 Jun 2019 23:35:20 +0530 Message-ID: <20190630180609.36705-9-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 08/57] net/octeontx2: handle queue specific error interrupts X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK 
patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Handle queue specific error interrupts. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 16 +- drivers/net/octeontx2/otx2_ethdev.h | 9 ++ drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++ 4 files changed, 216 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index e3f4c2c43..50e825968 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - SR-IOV VF - Lock-free Tx queue +- Debug utilities - error interrupt support Prerequisites ------------- diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 65d72a47f..045855c2e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) } /* Free the resources allocated from the previous configure */ - if (dev->configured == 1) + if (dev->configured == 1) { + oxt2_nix_unregister_queue_irqs(eth_dev); nix_lf_free(dev); + } if (otx2_dev_is_A0(dev) && (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) && @@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto fail; } + /* Register queue IRQs */ + rc = oxt2_nix_register_queue_irqs(eth_dev); + if (rc) { + otx2_err("Failed to register queue interrupts rc=%d", rc); + goto free_nix_lf; + } + /* Update the mac address */ ea = eth_dev->data->mac_addrs; memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); @@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) dev->configured_nb_tx_qs = data->nb_tx_queues; return 0; +free_nix_lf: + rc = nix_lf_free(dev); fail: return rc; } @@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Unregister queue irqs */ + oxt2_nix_unregister_queue_irqs(eth_dev); + rc = nix_lf_free(dev); if (rc) otx2_err("Failed to free nix lf, rc=%d", rc); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index c1528e2ac..d9cdd33b5 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -106,6 +106,11 @@ DEV_RX_OFFLOAD_QINQ_STRIP | \ DEV_RX_OFFLOAD_TIMESTAMP) +struct otx2_qint { + struct rte_eth_dev *eth_dev; + uint8_t qintx; +}; + struct otx2_rss_info { uint16_t rss_size; uint8_t rss_grps; @@ -134,6 +139,7 @@ struct otx2_eth_dev { uint16_t cints; uint16_t qints; uint8_t configured; + uint8_t configured_qints; uint8_t configured_nb_rx_qs; uint8_t configured_nb_tx_qs; uint16_t nix_msixoff; @@ -147,6 +153,7 @@ struct otx2_eth_dev { uint64_t tx_offloads; uint64_t rx_offload_capa; uint64_t tx_offload_capa; + struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; struct otx2_npc_flow_info npc_flow; } __rte_cache_aligned; @@ -163,7 +170,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); +int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); +void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); diff --git 
a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 33fed93c4..476c7ea78 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev) otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec); } +static inline uint8_t +nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q, + uint32_t off, uint64_t mask) +{ + uint64_t reg, wdata; + uint8_t qint; + + wdata = (uint64_t)q << 44; + reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off)); + + if (reg & BIT_ULL(42) /* OP_ERR */) { + otx2_err("Failed execute irq get off=0x%x", off); + return 0; + } + + qint = reg & 0xff; + wdata &= mask; + otx2_write64(wdata, dev->base + off); + + return qint; +} + +static inline uint8_t +nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq) +{ + return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq) +{ + return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq) +{ + return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00); +} + +static inline void +nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off) +{ + uint64_t reg; + + reg = otx2_read64(dev->base + off); + if (reg & BIT_ULL(44)) + otx2_err("SQ=%d err_code=0x%x", + (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff)); +} + +static void +nix_lf_q_irq(void *param) +{ + struct otx2_qint *qint = (struct otx2_qint *)param; + struct rte_eth_dev *eth_dev = qint->eth_dev; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint8_t irq, qintx = qint->qintx; + int q, cq, rq, sq; + uint64_t intr; + + intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx)); + if (intr == 0) + return; + + otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d", + intr, qintx, dev->pf, dev->vf); + + /* Handle RQ interrupts */ + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) { + rq = q % dev->qints; + irq = nix_lf_rq_irq_get_and_clear(dev, rq); + + if (irq & BIT_ULL(NIX_RQINT_DROP)) + otx2_err("RQ=%d NIX_RQINT_DROP", rq); + + if (irq & BIT_ULL(NIX_RQINT_RED)) + otx2_err("RQ=%d NIX_RQINT_RED", rq); + } + + /* Handle CQ interrupts */ + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) { + cq = q % dev->qints; + irq = nix_lf_cq_irq_get_and_clear(dev, cq); + + if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR)) + otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL)) + otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT)) + otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq); + } + + /* Handle SQ interrupts */ + for (q = 0; q < eth_dev->data->nb_tx_queues; q++) { + sq = q % dev->qints; + irq = nix_lf_sq_irq_get_and_clear(dev, sq); + + if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) { + otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) { + otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) { + otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) { + otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG); + } + } + + /* Clear 
interrupt */ + otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); +} + +int +oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec, q, sqs, rqs, qs, rc = 0; + + /* Figure out max qintx required */ + rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues); + sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues); + qs = RTE_MAX(rqs, sqs); + + dev->configured_qints = qs; + + for (q = 0; q < qs; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q)); + + dev->qints_mem[q].eth_dev = eth_dev; + dev->qints_mem[q].qintx = q; + + /* Sync qints_mem update */ + rte_smp_wmb(); + + /* Register queue irq vector */ + rc = otx2_register_irq(handle, nix_lf_q_irq, + &dev->qints_mem[q], vec); + if (rc) + break; + + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q)); + /* Enable QINT interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q)); + } + + return rc; +} + +void +oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec, q; + + for (q = 0; q < dev->configured_qints; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q)); + + /* Unregister queue irq vector */ + otx2_unregister_irq(handle, nix_lf_q_irq, + &dev->qints_mem[q], vec); + } +} + int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev) { From patchwork Sun Jun 30 18:05:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55685 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2E1171B9CB; Sun, 30 Jun 2019 20:07:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 55F8D1B99C for ; Sun, 30 Jun 2019 20:07:03 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6oes016863; Sun, 30 Jun 2019 11:07:02 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=JgmYYtReqhI9LdoLmGVQBAQ0+13sL5zjiFd+GNYJtOw=; b=LGsQcT9JVz/+I3WrVHK9nesTKuwMrEmFlMEWw0KqUxDJV0hWB1trR/OjvRj70B0iq5ej Au0yDLJwq2Is3a2LCU84hVyzBo/sotiG652Jw78c+9WdeAnk0beS8GUyCBm541y8SaRu GPtYZyd7Z2sC6WJTAp4d6h6T4KfdF8d1cVR/duq/51ECLhXrmfMLEg3X07ad39CVPtrG 5pdLN7Y0U9KUli9r7ZnJf/YzIAjsnCQnlFgXgXis0wcY6rlFLBizfqIoguKMIsTrBolG 8Fsu0ENwZbQyNqt6Bsp6ErL8+XfIJrsxeRbzzNO9oDCXN+A8mqEMKOgfP9Yggh5jyJJm sQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by 
mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y80-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:02 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:00 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:00 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id BC92A3F703F; Sun, 30 Jun 2019 11:06:58 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , "John McNamara" , Marko Kovacevic CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:21 +0530 Message-ID: <20190630180609.36705-10-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 09/57] net/octeontx2: add context debug utils X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add RQ,SQ,CQ context and CQE structure dump utils. Signed-off-by: Jerin Jacob Signed-off-by: Vivek Sharma --- doc/guides/nics/octeontx2.rst | 2 +- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.h | 4 + drivers/net/octeontx2/otx2_ethdev_debug.c | 272 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev_irq.c | 6 + 6 files changed, 285 insertions(+), 1 deletion(-) create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 50e825968..75d5746e8 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -18,7 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - SR-IOV VF - Lock-free Tx queue -- Debug utilities - error interrupt support +- Debug utilities - Context dump and error interrupt support Prerequisites ------------- diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 5083637e4..c6e24a535 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ + otx2_ethdev_debug.c \ otx2_ethdev_devargs.c LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index aa8417e3f..a06e1192c 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -7,6 +7,7 @@ sources = files( 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', + 'otx2_ethdev_debug.c', 'otx2_ethdev_devargs.c' ) diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index d9cdd33b5..7c0bef28e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -174,6 +174,10 @@ int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); void 
oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); +/* Debug */ +int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); +void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c new file mode 100644 index 000000000..39cda7637 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev_debug.c @@ -0,0 +1,272 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" + +#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__) + +static inline void +nix_lf_sq_dump(struct nix_sq_ctx_s *ctx) +{ + nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d", + ctx->sqe_way_mask, ctx->cq); + nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->sdp_mcast, ctx->substream); + nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", + ctx->qint_idx, ctx->ena); + + nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d", + ctx->sqb_count, ctx->default_chan); + nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d", + ctx->smq_rr_quantum, ctx->sso_ena); + nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n", + ctx->xoff, ctx->cq_ena, ctx->smq); + + nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d", + ctx->sqe_stype, ctx->sq_int_ena); + nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", + ctx->sq_int, ctx->sqb_aura); + nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count); + + nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d", + ctx->smq_next_sq_vld, ctx->smq_pend); + nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d", + ctx->smenq_next_sqb_vld, ctx->head_offset); + nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d", + ctx->smenq_offset, ctx->tail_offset); + nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d", + ctx->smq_lso_segnum, ctx->smq_next_sq); + nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", + ctx->mnq_dis, ctx->lmt_dis); + nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n", + ctx->cq_limit, ctx->max_sqe_size); + + nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb); + nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb); + nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb); + nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb); + nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb); + + nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d", + ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena); + nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d", + ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps); + nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d", + ctx->vfi_lso_sb, ctx->vfi_lso_sizem1); + nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total); + + nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "", + (uint64_t)ctx->scm_lso_rem); + nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_octs); + nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_pkts); +} + +static inline void +nix_lf_rq_dump(struct nix_rq_ctx_s *ctx) +{ + nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->wqe_aura, ctx->substream); + nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd 
\t\t\t%d", + ctx->cq, ctx->ena_wqwd); + nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d", + ctx->ipsech_ena, ctx->sso_ena); + nix_dump("W0: ena \t\t\t%d\n", ctx->ena); + + nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d", + ctx->lpb_drop_ena, ctx->spb_drop_ena); + nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d", + ctx->xqe_drop_ena, ctx->wqe_caching); + nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d", + ctx->pb_caching, ctx->sso_tt); + nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", + ctx->sso_grp, ctx->lpb_aura); + nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura); + + nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d", + ctx->xqe_hdr_split, ctx->xqe_imm_copy); + nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d", + ctx->xqe_imm_size, ctx->later_skip); + nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d", + ctx->first_skip, ctx->lpb_sizem1); + nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", + ctx->spb_ena, ctx->wqe_skip); + nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1); + + nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d", + ctx->spb_pool_pass, ctx->spb_pool_drop); + nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d", + ctx->spb_aura_pass, ctx->spb_aura_drop); + nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d", + ctx->wqe_pool_pass, ctx->wqe_pool_drop); + nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n", + ctx->xqe_pass, ctx->xqe_drop); + + nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d", + ctx->qint_idx, ctx->rq_int_ena); + nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", + ctx->rq_int, ctx->lpb_pool_pass); + nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d", + ctx->lpb_pool_drop, ctx->lpb_aura_pass); + nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop); + + nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d", + ctx->flow_tagw, ctx->bad_utag); + nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", + ctx->good_utag, ctx->ltag); + + nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs); + nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts); + nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts); +} + +static inline void +nix_lf_cq_dump(struct nix_cq_ctx_s *ctx) +{ + nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base); + + nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr); + nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d", + ctx->avg_con, ctx->cint_idx); + nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d", + ctx->cq_err, ctx->qint_idx); + nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n", + ctx->bpid, ctx->bp_ena); + + nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d", + ctx->update_time, ctx->avg_level); + nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n", + ctx->head, ctx->tail); + + nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d", + ctx->cq_err_int_ena, ctx->cq_err_int); + nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d", + ctx->qsize, ctx->caching); + nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d", + ctx->substream, ctx->ena); + nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d", + ctx->drop_ena, ctx->drop); + nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp); +} + +int +otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = 
otx2_eth_pmd_priv(eth_dev); + int rc, q, rq = eth_dev->data->nb_rx_queues; + int sq = eth_dev->data->nb_tx_queues; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + + for (q = 0; q < rq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get cq context"); + goto fail; + } + nix_dump("============== port=%d cq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_cq_dump(&rsp->cq); + } + + for (q = 0; q < rq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void **)&rsp); + if (rc) { + otx2_err("Failed to get rq context"); + goto fail; + } + nix_dump("============== port=%d rq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_rq_dump(&rsp->rq); + } + for (q = 0; q < sq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get sq context"); + goto fail; + } + nix_dump("============== port=%d sq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_sq_dump(&rsp->sq); + } + +fail: + return rc; +} + +/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */ +void +otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq) +{ + const struct nix_rx_parse_s *rx = + (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1); + + nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d", + cq->tag, cq->q, cq->node, cq->cqe_type); + + nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", + rx->chan, rx->desc_sizem1); + nix_dump("W0: imm_copy \t%d\t\texpress \t%d", + rx->imm_copy, rx->express); + nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d", + rx->wqwd, rx->errlev, rx->errcode); + nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d", + rx->latype, rx->lbtype, rx->lctype); + nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d", + rx->ldtype, rx->letype, rx->lftype); + nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d", + rx->lgtype, rx->lhtype); + + nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1); + nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d", + rx->l2m, rx->l2b, rx->l3m, rx->l3b); + nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d", + rx->vtag0_valid, rx->vtag0_gone); + nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d", + rx->vtag1_valid, rx->vtag1_gone); + nix_dump("W1: pkind \t%d", rx->pkind); + nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d", + rx->vtag0_tci, rx->vtag1_tci); + + nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d", + rx->laflags, rx->lbflags, rx->lcflags); + nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d", + rx->ldflags, rx->leflags, rx->lfflags); + nix_dump("W2: lgflags \t%d\t\tlhflags \t%d", + rx->lgflags, rx->lhflags); + + nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d", + rx->eoh_ptr, rx->wqe_aura, rx->pb_aura); + nix_dump("W3: match_id \t%d", rx->match_id); + + nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d", + rx->laptr, rx->lbptr, rx->lcptr); + nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d", + rx->ldptr, rx->leptr, rx->lfptr); + nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr); + + nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d", + rx->vtag0_ptr, 
rx->vtag1_ptr, rx->flow_key_alg); +} diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 476c7ea78..fdebdef38 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -23,6 +23,8 @@ nix_lf_err_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_ERR_INT); + + otx2_nix_queues_ctx_dump(eth_dev); } static int @@ -75,6 +77,8 @@ nix_lf_ras_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_RAS); + + otx2_nix_queues_ctx_dump(eth_dev); } static int @@ -232,6 +236,8 @@ nix_lf_q_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); + + otx2_nix_queues_ctx_dump(eth_dev); } int From patchwork Sun Jun 30 18:05:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55686 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D1B351B9D4; Sun, 30 Jun 2019 20:07:34 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 2B0A01B9C4 for ; Sun, 30 Jun 2019 20:07:06 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5KjJ015698; Sun, 30 Jun 2019 11:07:05 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=536L+2YLZuY0RHWVQxFUCmZT0JRmLcfc6D80tPOEC+w=; b=GsWcoqoLmU4hwyoUCpnTrx3qR7KENnpzAfLl9tNLsT+wiG1MkgNvjZKclYSIK4z35wTM siLgBjMGDByNyfHeM4cSfX5xu9SOSlRDMo+VN7oLRSn9bvZJH07CxaQCgbXHCcworLHN YTRV5mmxK5JQqeCepzw0r7Jq9qMQKJoro17sRhhu//XQbMjudmQukYb4aBQ2br5Lf5T6 in7f33cOcO+1bvvFReS0KUUg+Itk06M+OqTbPwpBHfSVPu0ZJ3wl7itiVdeQfrafL5oS vTEYkU6WRusSzqkphZT9quBYdkGKLnKtU3kofeAw+3fWgxn2DGOIRdEuIhhUgFXHjWwc kg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y84-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:05 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:03 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:03 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id E1E903F7040; Sun, 30 Jun 2019 11:07:01 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K Date: Sun, 30 Jun 2019 23:35:22 +0530 Message-ID: <20190630180609.36705-11-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 10/57] net/octeontx2: add register dump support X-BeenThere: dev@dpdk.org 
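[The per-queue dump loops in patch 09/57 above all issue the same NIX admin-queue read; pulled out for a single completion queue, the idiom looks as below. This is a sketch only: it reuses the mailbox calls exactly as otx2_nix_queues_ctx_dump() issues them, with error handling trimmed, and nix_cq_ctx_read() is an illustrative name, not a function from the patch.]

#include "otx2_ethdev.h"

/* Sketch: read one CQ context through the admin queue, as the dump
 * loops in otx2_nix_queues_ctx_dump() above do per queue. The same
 * idiom serves RQ/SQ by swapping NIX_AQ_CTYPE_CQ for the RQ/SQ ctype.
 */
static int
nix_cq_ctx_read(struct otx2_eth_dev *dev, uint16_t qidx,
		struct nix_aq_enq_rsp **rsp)
{
	struct otx2_mbox *mbox = dev->mbox;
	struct nix_aq_enq_req *aq;

	aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
	aq->qidx = qidx;                 /* queue to inspect */
	aq->ctype = NIX_AQ_CTYPE_CQ;     /* context type: CQ */
	aq->op = NIX_AQ_INSTOP_READ;     /* read, don't modify */

	/* On success, (*rsp)->cq holds the snapshot passed to nix_lf_cq_dump() */
	return otx2_mbox_process_msg(mbox, (void **)rsp);
}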
X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Add register dump support and mark Registers dump in features. Signed-off-by: Kiran Kumar K Signed-off-by: Jerin Jacob --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 1 + drivers/net/octeontx2/otx2_ethdev.h | 3 + drivers/net/octeontx2/otx2_ethdev_debug.c | 228 +++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev_irq.c | 6 + 7 files changed, 241 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 356b88de7..7d53bf0e7 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -8,6 +8,7 @@ Speed capabilities = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Registers dump = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 5f4eaa3f4..e0cc7b22d 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -8,6 +8,7 @@ Speed capabilities = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Registers dump = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 024b032d4..6dfdf88c6 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -7,6 +7,7 @@ Speed capabilities = Y Lock-free Tx queue = Y Multiprocess aware = Y +Registers dump = Y Linux VFIO = Y ARMv8 = Y Usage doc = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 045855c2e..48d5a15d6 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -229,6 +229,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, + .get_reg = otx2_nix_dev_get_reg, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7c0bef28e..7313689b0 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -175,6 +175,9 @@ void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); /* Debug */ +int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data); +int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, + struct rte_dev_reg_info *regs); int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c index 39cda7637..9f06e5505 100644 --- a/drivers/net/octeontx2/otx2_ethdev_debug.c +++ b/drivers/net/octeontx2/otx2_ethdev_debug.c @@ -5,6 +5,234 @@ #include "otx2_ethdev.h" #define nix_dump(fmt, ...) 
fprintf(stderr, fmt "\n", ##__VA_ARGS__) +#define NIX_REG_INFO(reg) {reg, #reg} + +struct nix_lf_reg_info { + uint32_t offset; + const char *name; +}; + +static const struct +nix_lf_reg_info nix_lf_reg[] = { + NIX_REG_INFO(NIX_LF_RX_SECRETX(0)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(1)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(2)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(3)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(4)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(5)), + NIX_REG_INFO(NIX_LF_CFG), + NIX_REG_INFO(NIX_LF_GINT), + NIX_REG_INFO(NIX_LF_GINT_W1S), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1C), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT), + NIX_REG_INFO(NIX_LF_ERR_INT_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S), + NIX_REG_INFO(NIX_LF_RAS), + NIX_REG_INFO(NIX_LF_RAS_W1S), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1C), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1S), + NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG), + NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG), + NIX_REG_INFO(NIX_LF_SEND_ERR_DBG), +}; + +static int +nix_lf_get_reg_count(struct otx2_eth_dev *dev) +{ + int reg_count = 0; + + reg_count = RTE_DIM(nix_lf_reg); + /* NIX_LF_TX_STATX */ + reg_count += dev->lf_tx_stats; + /* NIX_LF_RX_STATX */ + reg_count += dev->lf_rx_stats; + /* NIX_LF_QINTX_CNT*/ + reg_count += dev->qints; + /* NIX_LF_QINTX_INT */ + reg_count += dev->qints; + /* NIX_LF_QINTX_ENA_W1S */ + reg_count += dev->qints; + /* NIX_LF_QINTX_ENA_W1C */ + reg_count += dev->qints; + /* NIX_LF_CINTX_CNT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_WAIT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_INT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_INT_W1S */ + reg_count += dev->cints; + /* NIX_LF_CINTX_ENA_W1S */ + reg_count += dev->cints; + /* NIX_LF_CINTX_ENA_W1C */ + reg_count += dev->cints; + + return reg_count; +} + +int +otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data) +{ + uintptr_t nix_lf_base = dev->base; + bool dump_stdout; + uint64_t reg; + uint32_t i; + + dump_stdout = data ? 
0 : 1; + + for (i = 0; i < RTE_DIM(nix_lf_reg); i++) { + reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset); + if (dump_stdout && reg) + nix_dump("%32s = 0x%" PRIx64, + nix_lf_reg[i].name, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_TX_STATX */ + for (i = 0; i < dev->lf_tx_stats; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_TX_STATX", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_RX_STATX */ + for (i = 0; i < dev->lf_rx_stats; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_RX_STATX", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_CNT*/ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_CNT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_INT */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_INT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1S */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_ENA_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1C */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_ENA_W1C", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_CNT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_CNT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_WAIT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_WAIT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_INT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT_W1S */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_INT_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1S */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_ENA_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1C */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_ENA_W1C", i, reg); + if (data) + *data++ = reg; + } + return 0; +} + +int +otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t *data = regs->data; + + if (data == NULL) { + regs->length = nix_lf_get_reg_count(dev); + regs->width = 8; + return 0; + } + + if 
(!regs->length || + regs->length == (uint32_t)nix_lf_get_reg_count(dev)) { + otx2_nix_reg_dump(dev, data); + return 0; + } + + return -ENOTSUP; +} static inline void nix_lf_sq_dump(struct nix_sq_ctx_s *ctx) diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index fdebdef38..066aca7a5 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -24,6 +24,8 @@ nix_lf_err_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_ERR_INT); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); } @@ -78,6 +80,8 @@ nix_lf_ras_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_RAS); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); } @@ -237,6 +241,8 @@ nix_lf_q_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); } From patchwork Sun Jun 30 18:05:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55687 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4F3E91B9D8; Sun, 30 Jun 2019 20:07:37 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 31CFA1B999 for ; Sun, 30 Jun 2019 20:07:09 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI74rD016918; Sun, 30 Jun 2019 11:07:08 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=rHT9NggsDr7DrB5R9xPgTli3uQzBKMOu6Rzcg0aCTL8=; b=sP2lKIwtn4/L19AoChww6BToRCB5XPAgYDaWQpxDwSP+40BAKbGrFIMzHQqj0R6RiC4r B4vo3z8sCoQ20lQfhk1XvBbhBUO8kWVUwBgzq3W3o9QINKVNgrEGsRYKWPR7t/U8zdD6 L3Fe6L/77jr9N/OiEK1+3wn7h3A9OVedeF1bTYe9nvXUSSyHO7kossmHub2OG/NjlVWB DqVAQFDyPFAo1xrBITO30XfxWvvM4L1Ze81QbBYnINemFPpAVtItwLTSdbQ4kQbRWdRV Hx3covy7bij2VnsInPSSKY2/oKpxNSyC8VyWWqa8hQ+U7UPRLghPTbIC+pkfJAeQmN/d tw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y8d-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:08 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:06 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:06 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CDDC13F7040; Sun, 30 Jun 2019 11:07:04 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:23 +0530 Message-ID: <20190630180609.36705-12-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: 
<20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 11/57] net/octeontx2: add link stats operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add link stats related operations and mark respective items in the documentation. Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 8 ++ drivers/net/octeontx2/otx2_ethdev.h | 8 ++ drivers/net/octeontx2/otx2_link.c | 108 +++++++++++++++++++++ 9 files changed, 133 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_link.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 7d53bf0e7..828351409 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -8,6 +8,8 @@ Speed capabilities = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index e0cc7b22d..719692dc6 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -8,6 +8,8 @@ Speed capabilities = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 6dfdf88c6..4d5667583 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -7,6 +7,8 @@ Speed capabilities = Y Lock-free Tx queue = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 75d5746e8..a163f9128 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - SR-IOV VF - Lock-free Tx queue +- Link state information - Debug utilities - Context dump and error interrupt support Prerequisites diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index c6e24a535..2dfb5043d 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -29,6 +29,7 @@ LIBABIVER := 1 # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ + otx2_link.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index a06e1192c..d693386b9 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -4,6 +4,7 @@ sources = files( 'otx2_mac.c', + 'otx2_link.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c 
b/drivers/net/octeontx2/otx2_ethdev.c index 48d5a15d6..cb4f6ebb9 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -39,6 +39,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) return NIX_TX_OFFLOAD_CAPA; } +static const struct otx2_dev_ops otx2_dev_ops = { + .link_status_update = otx2_eth_dev_link_status_update, +}; + static int nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq) { @@ -229,6 +233,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, + .link_update = otx2_nix_link_update, .get_reg = otx2_nix_dev_get_reg, }; @@ -324,6 +329,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) goto error; } } + /* Device generic callbacks */ + dev->ops = &otx2_dev_ops; + dev->eth_dev = eth_dev; /* Grab the NPA LF if required */ rc = otx2_npa_lf_init(pci_dev, dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7313689b0..d8490337d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -136,6 +136,7 @@ struct otx2_eth_dev { uint8_t max_mac_entries; uint8_t lf_tx_stats; uint8_t lf_rx_stats; + uint16_t flags; uint16_t cints; uint16_t qints; uint8_t configured; @@ -156,6 +157,7 @@ struct otx2_eth_dev { struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; struct otx2_npc_flow_info npc_flow; + struct rte_eth_dev *eth_dev; } __rte_cache_aligned; static inline struct otx2_eth_dev * @@ -168,6 +170,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +/* Link */ +void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); +int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); +void otx2_eth_dev_link_status_update(struct otx2_dev *dev, + struct cgx_link_user_info *link); + /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c new file mode 100644 index 000000000..228a0cd8e --- /dev/null +++ b/drivers/net/octeontx2/otx2_link.c @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "otx2_ethdev.h" + +void +otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set) +{ + if (set) + dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F; + else + dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F; + + rte_wmb(); +} + +static inline int +nix_wait_for_link_cfg(struct otx2_eth_dev *dev) +{ + uint16_t wait = 1000; + + do { + rte_rmb(); + if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F)) + break; + wait--; + rte_delay_ms(1); + } while (wait); + + return wait ? 0 : -1; +} + +static void +nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link) +{ + if (link && link->link_status) + otx2_info("Port %d: Link Up - speed %u Mbps - %s", + (int)(eth_dev->data->port_id), + (uint32_t)link->link_speed, + link->link_duplex == ETH_LINK_FULL_DUPLEX ? 
+ "full-duplex" : "half-duplex"); + else + otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id)); +} + +void +otx2_eth_dev_link_status_update(struct otx2_dev *dev, + struct cgx_link_user_info *link) +{ + struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev; + struct rte_eth_dev *eth_dev = otx2_dev->eth_dev; + struct rte_eth_link eth_link; + + if (!link || !dev || !eth_dev->data->dev_conf.intr_conf.lsc) + return; + + if (nix_wait_for_link_cfg(otx2_dev)) { + otx2_err("Timeout waiting for link_cfg to complete"); + return; + } + + eth_link.link_status = link->link_up; + eth_link.link_speed = link->speed; + eth_link.link_autoneg = ETH_LINK_AUTONEG; + eth_link.link_duplex = link->full_duplex; + + /* Print link info */ + nix_link_status_print(eth_dev, ð_link); + + /* Update link info */ + rte_eth_linkstatus_set(eth_dev, ð_link); + + /* Set the flag and execute application callbacks */ + _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL); +} + +int +otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_link_info_msg *rsp; + struct rte_eth_link link; + int rc; + + RTE_SET_USED(wait_to_complete); + + if (otx2_dev_is_lbk(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox); + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + link.link_status = rsp->link_info.link_up; + link.link_speed = rsp->link_info.speed; + link.link_autoneg = ETH_LINK_AUTONEG; + + if (rsp->link_info.full_duplex) + link.link_duplex = rsp->link_info.full_duplex; + + return rte_eth_linkstatus_set(eth_dev, &link); +} From patchwork Sun Jun 30 18:05:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55688 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BCB2C1B9DC; Sun, 30 Jun 2019 20:07:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id ECFF51B9B1 for ; Sun, 30 Jun 2019 20:07:11 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5DtM027158; Sun, 30 Jun 2019 11:07:11 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=v3eu+ldOmmjybSkuxDEFd5dynod2eRAQyOH1vrr7cqU=; b=HlpmSj00K6lTqhYsneorUSL6ve2pZUp62atUpm/g9W9FQ4YdhboAZBvfwmHK3jCXPtbV jo1DIBDSzkkOH6m1UIzMEp1oc1ZoX0o2vS+GYAaU01SNLbMegGv+Nldhqlb5+cbbIzio cSqrtOQuSy2KFScxZ5/iKEYCa2Gr7oz66Bj3y/ElvKr1MH4JGV7VSmHWP93I232j3+Qv R2lv0JAHNa2HYyrG9SvZf3ufCm0TrF4+g9VXgGYV73WCPjWfyzylXNcqIVPWeszBkahu WdK8Cct6Rhyg1K7E8JtTZqx4pNSK6lcZOGi31jeqRcynNN9W+QWeh+jVVnYmAzEtZQce gQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gfb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:11 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:10 -0700 Received: from maili.marvell.com 
(10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:10 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id F182E3F7041; Sun, 30 Jun 2019 11:07:07 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:24 +0530 Message-ID: <20190630180609.36705-13-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 12/57] net/octeontx2: add basic stats operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Add basic stats operations and update the feature list. Signed-off-by: Kiran Kumar K Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 3 + drivers/net/octeontx2/otx2_ethdev.h | 17 +++ drivers/net/octeontx2/otx2_stats.c | 117 +++++++++++++++++++++ 9 files changed, 146 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_stats.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 828351409..557107016 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -10,6 +10,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 719692dc6..3a2b78e06 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -10,6 +10,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 4d5667583..499f66c5c 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -9,6 +9,8 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index a163f9128..2944bbb99 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - SR-IOV VF - Lock-free Tx queue +- Port hardware statistics - Link state information - Debug utilities - Context dump and error interrupt support diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 2dfb5043d..fbe5e9f44 100644 --- a/drivers/net/octeontx2/Makefile +++
b/drivers/net/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ otx2_link.c \ + otx2_stats.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index d693386b9..1c57b1bb4 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -5,6 +5,7 @@ sources = files( 'otx2_mac.c', 'otx2_link.c', + 'otx2_stats.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index cb4f6ebb9..5787029d9 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -234,7 +234,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .stats_get = otx2_nix_dev_stats_get, + .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index d8490337d..1cd9893a6 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -77,6 +77,12 @@ #define NIX_TX_NB_SEG_MAX 9 #endif +#define CQ_OP_STAT_OP_ERR 63 +#define CQ_OP_STAT_CQ_ERR 46 + +#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR) +#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR) + #define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\ ETH_RSS_TCP | ETH_RSS_SCTP | \ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD) @@ -156,6 +162,8 @@ struct otx2_eth_dev { uint64_t tx_offload_capa; struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; + uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; + uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; struct rte_eth_dev *eth_dev; } __rte_cache_aligned; @@ -189,6 +197,15 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); +/* Stats */ +int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *stats); +void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev); + +int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev, + uint16_t queue_id, uint8_t stat_idx, + uint8_t is_rx); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c new file mode 100644 index 000000000..ade0f6ad6 --- /dev/null +++ b/drivers/net/octeontx2/otx2_stats.c @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include + +#include "otx2_ethdev.h" + +int +otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *stats) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t reg, val; + uint32_t qidx, i; + int64_t *addr; + + stats->opackets = otx2_read64(dev->base + + NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST)); + stats->opackets += otx2_read64(dev->base + + NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST)); + stats->opackets += otx2_read64(dev->base + + NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST)); + stats->oerrors = otx2_read64(dev->base + + NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP)); + stats->obytes = otx2_read64(dev->base + + NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS)); + + stats->ipackets = otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST)); + stats->ipackets += otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST)); + stats->ipackets += otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST)); + stats->imissed = otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP)); + stats->ibytes = otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS)); + stats->ierrors = otx2_read64(dev->base + + NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR)); + + for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) { + if (dev->txmap[i] & (1 << 31)) { + qidx = dev->txmap[i] & 0xFFFF; + reg = (((uint64_t)qidx) << 32); + + addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_ipackets[i] = val; + + addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_ibytes[i] = val; + + addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_errors[i] = val; + } + } + + for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) { + if (dev->rxmap[i] & (1 << 31)) { + qidx = dev->rxmap[i] & 0xFFFF; + reg = (((uint64_t)qidx) << 32); + + addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_opackets[i] = val; + + addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_obytes[i] = val; + + addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS); + val = otx2_atomic64_add_nosync(reg, addr); + if (val & OP_ERR) + val = 0; + stats->q_errors[i] += val; + } + } + + return 0; +} + +void +otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_stats_rst(mbox); + otx2_mbox_process(mbox); +} + +int +otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id, + uint8_t stat_idx, uint8_t is_rx) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + if (is_rx) + dev->rxmap[stat_idx] = ((1 << 31) | queue_id); + else + dev->txmap[stat_idx] = ((1 << 31) | queue_id); + + return 0; +} From patchwork Sun Jun 30 18:05:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55689 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2606F1B9E1; Sun, 30 Jun 2019 20:07:42 
+0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 4D57B1B999 for ; Sun, 30 Jun 2019 20:07:15 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI72j6028445; Sun, 30 Jun 2019 11:07:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=MYpHe76kTMmJBk5/70Ogc1u7eCTX+5yvI3rbO0mDjVo=; b=ABc55kUTkgbU7x3HE6c2y3iu1QJbfsp34XP2mg68dkEVhmluFwEOT2YJ1EoYAiqesd3q fG0tGTBiaxJ4EXl9yFb6Nbi1ML16TlrGWFk+xj1H39anWgvkdUC6RIcWh5NxGrxb5Re3 KD51Gyben+9wNaMRFb+X9/XqQ1syoeF2hmHYgP9IHtjOgMlWY0/mKPIJvNL1eUkKnupL HQIKRnNZkmlKRkdqq4TUh0+2ny+oV0GCV/oKKBfo9wBBtBXsYXjY2gnNY3v5tcRJ6Tfx lVA8TrU6FUtyDuOaP10ue00+gEpVOCgSd8zbklPRbgfR8x0XBbqg64aiMBiYz8WOyFP/ Cw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gfh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:14 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:13 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:13 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 0D84D3F703F; Sun, 30 Jun 2019 11:07:10 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:25 +0530 Message-ID: <20190630180609.36705-14-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 13/57] net/octeontx2: add extended stats operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Add extended stats operations and update the feature list.
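For context, a minimal sketch of how an application would read these extended stats back through the public ethdev API (the helper name, allocation strategy, and output format below are illustrative, not part of this patch):

#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Illustrative helper: query the xstat count, fetch names and values,
 * then print every counter the PMD exposes for this port. */
static void
dump_port_xstats(uint16_t port_id)
{
	struct rte_eth_xstat_name *names = NULL;
	struct rte_eth_xstat *xstats = NULL;
	int cnt, i;

	/* A NULL array asks the driver only for the number of xstats */
	cnt = rte_eth_xstats_get(port_id, NULL, 0);
	if (cnt <= 0)
		return;

	names = calloc(cnt, sizeof(*names));
	xstats = calloc(cnt, sizeof(*xstats));
	if (names == NULL || xstats == NULL)
		goto out;

	if (rte_eth_xstats_get_names(port_id, names, cnt) != cnt ||
	    rte_eth_xstats_get(port_id, xstats, cnt) != cnt)
		goto out;

	for (i = 0; i < cnt; i++)
		printf("%s: %" PRIu64 "\n",
		       names[xstats[i].id].name, xstats[i].value);
out:
	free(names);
	free(xstats);
}

With this patch applied, the printed list for an octeontx2 port would cover the NIX LF counters the driver registers below, such as tx_ucast and rx_drop.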
Signed-off-by: Kiran Kumar K Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 5 + drivers/net/octeontx2/otx2_ethdev.h | 13 + drivers/net/octeontx2/otx2_stats.c | 270 +++++++++++++++++++++ 6 files changed, 291 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 557107016..8d7c3588c 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Basic stats = Y Stats per queue = Y +Extended stats = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 3a2b78e06..a6e6876fa 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -11,6 +11,7 @@ Multiprocess aware = Y Link status = Y Link status event = Y Basic stats = Y +Extended stats = Y Stats per queue = Y Registers dump = Y Linux VFIO = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 499f66c5c..6ec83e823 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -10,6 +10,7 @@ Multiprocess aware = Y Link status = Y Link status event = Y Basic stats = Y +Extended stats = Y Stats per queue = Y Registers dump = Y Linux VFIO = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 5787029d9..937ba6399 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -238,6 +238,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, + .xstats_get = otx2_nix_xstats_get, + .xstats_get_names = otx2_nix_xstats_get_names, + .xstats_reset = otx2_nix_xstats_reset, + .xstats_get_by_id = otx2_nix_xstats_get_by_id, + .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 1cd9893a6..7d53a6643 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -205,6 +205,19 @@ void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev); int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev, uint16_t queue_id, uint8_t stat_idx, uint8_t is_rx); +int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat *xstats, unsigned int n); +int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int limit); +void otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev); + +int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, + const uint64_t *ids, + uint64_t *values, unsigned int n); +int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, unsigned int limit); /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c index ade0f6ad6..deb83b704 100644 --- a/drivers/net/octeontx2/otx2_stats.c +++ b/drivers/net/octeontx2/otx2_stats.c @@ -6,6 +6,45 @@ #include "otx2_ethdev.h" +struct otx2_nix_xstats_name { + char 
name[RTE_ETH_XSTATS_NAME_SIZE]; + uint32_t offset; +}; + +static const struct otx2_nix_xstats_name nix_tx_xstats[] = { + {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST}, + {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST}, + {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST}, + {"tx_drop", NIX_STAT_LF_TX_TX_DROP}, + {"tx_octs", NIX_STAT_LF_TX_TX_OCTS}, +}; + +static const struct otx2_nix_xstats_name nix_rx_xstats[] = { + {"rx_octs", NIX_STAT_LF_RX_RX_OCTS}, + {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST}, + {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST}, + {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST}, + {"rx_drop", NIX_STAT_LF_RX_RX_DROP}, + {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS}, + {"rx_fcs", NIX_STAT_LF_RX_RX_FCS}, + {"rx_err", NIX_STAT_LF_RX_RX_ERR}, + {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST}, + {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST}, + {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST}, + {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST}, +}; + +static const struct otx2_nix_xstats_name nix_q_xstats[] = { + {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS}, +}; + +#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats) +#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats) +#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats) + +#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \ + OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS) + int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats) @@ -115,3 +154,234 @@ otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id, return 0; } + +int +otx2_nix_xstats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat *xstats, + unsigned int n) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + unsigned int i, count = 0; + uint64_t reg, val; + + if (n < OTX2_NIX_NUM_XSTATS_REG) + return OTX2_NIX_NUM_XSTATS_REG; + + if (xstats == NULL) + return 0; + + for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) { + xstats[count].value = otx2_read64(dev->base + + NIX_LF_TX_STATX(nix_tx_xstats[i].offset)); + xstats[count].id = count; + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) { + xstats[count].value = otx2_read64(dev->base + + NIX_LF_RX_STATX(nix_rx_xstats[i].offset)); + xstats[count].id = count; + count++; + } + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + reg = (((uint64_t)i) << 32); + val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base + + nix_q_xstats[0].offset)); + if (val & OP_ERR) + val = 0; + xstats[count].value += val; + } + xstats[count].id = count; + count++; + + return count; +} + +int +otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int limit) +{ + unsigned int i, count = 0; + + RTE_SET_USED(eth_dev); + + if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL) + return -ENOMEM; + + if (xstats_names) { + for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_tx_xstats[i].name); + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_rx_xstats[i].name); + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_q_xstats[i].name); + count++; + } + } + + return OTX2_NIX_NUM_XSTATS_REG; +} + +int +otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, unsigned int limit) +{ + struct rte_eth_xstat_name 
xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG]; + uint16_t i; + + if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL) + return OTX2_NIX_NUM_XSTATS_REG; + + if (limit > OTX2_NIX_NUM_XSTATS_REG) + return -EINVAL; + + if (xstats_names == NULL) + return -ENOMEM; + + otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit); + + for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) { + if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) { + otx2_err("Invalid id value"); + return -EINVAL; + } + strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name, + sizeof(xstats_names[i].name)); + } + + return limit; +} + +int +otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids, + uint64_t *values, unsigned int n) +{ + struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG]; + uint16_t i; + + if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL) + return OTX2_NIX_NUM_XSTATS_REG; + + if (n > OTX2_NIX_NUM_XSTATS_REG) + return -EINVAL; + + if (values == NULL) + return -ENOMEM; + + otx2_nix_xstats_get(eth_dev, xstats, n); + + for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) { + if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) { + otx2_err("Invalid id value"); + return -EINVAL; + } + values[i] = xstats[ids[i]].value; + } + + return n; +} + +static void +nix_queue_stats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + uint32_t i; + int rc; + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to read rq context"); + return; + } + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq)); + otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask)); + aq->rq.octs = 0; + aq->rq.pkts = 0; + aq->rq.drop_octs = 0; + aq->rq.drop_pkts = 0; + aq->rq.re_pkts = 0; + + aq->rq_mask.octs = ~(aq->rq_mask.octs); + aq->rq_mask.pkts = ~(aq->rq_mask.pkts); + aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs); + aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts); + aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to write rq context"); + return; + } + } + + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to read sq context"); + return; + } + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq)); + otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask)); + aq->sq.octs = 0; + aq->sq.pkts = 0; + aq->sq.drop_octs = 0; + aq->sq.drop_pkts = 0; + + aq->sq_mask.octs = ~(aq->sq_mask.octs); + aq->sq_mask.pkts = ~(aq->sq_mask.pkts); + aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs); + aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to write sq context"); + return; + } + } +} + +void +otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + + 
otx2_mbox_alloc_msg_nix_stats_rst(mbox); + otx2_mbox_process(mbox); + + /* Reset queue stats */ + nix_queue_stats_reset(eth_dev); +} From patchwork Sun Jun 30 18:05:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55690 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0D0361B9E6; Sun, 30 Jun 2019 20:07:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 133FD1B99E for ; Sun, 30 Jun 2019 20:07:18 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5cCI016218; Sun, 30 Jun 2019 11:07:18 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=z2Eoy+eh5dUagMl0WZ54qjoj+u//Yx+Xz28s5WyLc8I=; b=GYRT3P+RHZapHvaOKQBneSzFH2Mf+YXgb8g4WFRny/5u86dy6oNByhJl4BhVzoNf78pK MM2gWEOaNtWWVdWi7Qi1zaqgGqxYJQyaNYdZ04DfCDp+DnlZfiRYqfinWL3ESuPu7CDC fAN+Uw1XiYDcn5YPbp3myIJW9QXaSeN6FtU/BayZV0OUj2g1G5l2k7VEgIu/mx1e4dvw mq19KFGvV8Tl/9SiJkj5/+kHmrpfhcYRiX03lwVe/DVAMGe83E2t+mJCTp70CNs4PYJH ESjtPJpieIi8T4EnlkYqJp2X3hxO9C3F+hBrKKjmdxHpoomTyjAStSZ3ctmj+yEZ/WGi Xg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y8u-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:18 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:16 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:16 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 25E3B3F703F; Sun, 30 Jun 2019 11:07:13 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru , Sunil Kumar Kori Date: Sun, 30 Jun 2019 23:35:26 +0530 Message-ID: <20190630180609.36705-15-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 14/57] net/octeontx2: add promiscuous and allmulticast mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add promiscuous and allmulticast mode for PF devices and update the respective feature list. 
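For reference, a short sketch of driving the new Rx-mode callbacks from an application through the generic ethdev layer (the helper is hypothetical; in this release the enable calls return void, and the PMD callbacks added below return early on VF devices):

#include <stdio.h>
#include <rte_ethdev.h>

/* Illustrative helper: turn on promiscuous and all-multicast Rx,
 * then read the promiscuous state back from the ethdev layer. */
static void
enable_rx_modes(uint16_t port_id)
{
	rte_eth_promiscuous_enable(port_id);	/* NIX_RX_MODE_PROMISC */
	rte_eth_allmulticast_enable(port_id);	/* NIX_RX_MODE_ALLMULTI */

	if (rte_eth_promiscuous_get(port_id) == 1)
		printf("port %u: promiscuous mode enabled\n",
		       (unsigned int)port_id);
}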
Signed-off-by: Vamsi Attunuru Signed-off-by: Sunil Kumar Kori --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 4 ++ drivers/net/octeontx2/otx2_ethdev.h | 6 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 82 ++++++++++++++++++++++ 6 files changed, 97 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 8d7c3588c..9f682609d 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -10,6 +10,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Promiscuous mode = Y +Allmulticast mode = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index a6e6876fa..764e95ce6 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -10,6 +10,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Promiscuous mode = Y +Allmulticast mode = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 2944bbb99..9ef7be08f 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -16,6 +16,7 @@ Features Features of the OCTEON TX2 Ethdev PMD are: +- Promiscuous mode - SR-IOV VF - Lock-free Tx queue - Port hardware statistics diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 937ba6399..826ce7f4e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -237,6 +237,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .promiscuous_enable = otx2_nix_promisc_enable, + .promiscuous_disable = otx2_nix_promisc_disable, + .allmulticast_enable = otx2_nix_allmulticast_enable, + .allmulticast_disable = otx2_nix_allmulticast_disable, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, .xstats_get = otx2_nix_xstats_get, .xstats_get_names = otx2_nix_xstats_get_names, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7d53a6643..814fd6ec3 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -178,6 +178,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en); +void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); +void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); +void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); +void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); + /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index df7e909d2..301a597f8 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -4,6 +4,88 @@ #include "otx2_ethdev.h" +static void +nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = 
dev->mbox; + + if (otx2_dev_is_vf(dev)) + return; + + if (en) + otx2_mbox_alloc_msg_cgx_promisc_enable(mbox); + else + otx2_mbox_alloc_msg_cgx_promisc_disable(mbox); + + otx2_mbox_process(mbox); +} + +void +otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_rx_mode *req; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox); + + if (en) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC; + + otx2_mbox_process(mbox); + eth_dev->data->promiscuous = en; +} + +void +otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev) +{ + otx2_nix_promisc_config(eth_dev, 1); + nix_cgx_promisc_config(eth_dev, 1); +} + +void +otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev) +{ + otx2_nix_promisc_config(eth_dev, 0); + nix_cgx_promisc_config(eth_dev, 0); +} + +static void +nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_rx_mode *req; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox); + + if (en) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI; + else if (eth_dev->data->promiscuous) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC; + + otx2_mbox_process(mbox); +} + +void +otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev) +{ + nix_allmulticast_config(eth_dev, 1); +} + +void +otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev) +{ + nix_allmulticast_config(eth_dev, 0); +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 30 18:05:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55691 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D82F91B9DB; Sun, 30 Jun 2019 20:07:47 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 93A3B1B9AD for ; Sun, 30 Jun 2019 20:07:21 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI72j7028445; Sun, 30 Jun 2019 11:07:20 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=cQUMcPXaWQfSjmz95cHZukKkGAnKdrS764Rb1yWUsY4=; b=mtKIWXk4RtQfaH1/DyMVaSrQz57UraKzqppmnrP5YEi9TXVZZxnkvsw0JFQa26go13XX aV2ppDsEBWZItz6T+Pya/CRAwuh8ING5MVN4IkpSL54nDje+umtv8R7WIC2qPAURhSTn lzrrrw4R4Dl+tP7+39mCMP4oFgx0DDq5p745qkw5xA3iP0w2BkdCYwDd8ILSDkOy+yPV oa2M2rWpjs30N6dDI/X5Rdy8EM2Q5PNZcsFTVqfHVwUZt3PGG9lrg+cUqZz5i52SpJXF CDOQtDMJzUjU8HaVBFdp6GOIau90aiIPPCY1TCJh8NKNRLUIfy34v6Z+2kqBHsSgL1kj iA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gft-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:20 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:19 -0700 Received: from 
maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:19 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 729153F703F; Sun, 30 Jun 2019 11:07:17 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Sunil Kumar Kori , Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:27 +0530 Message-ID: <20190630180609.36705-16-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 15/57] net/octeontx2: add unicast MAC filter X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add unicast MAC filter for PF device and update the respective feature list. Signed-off-by: Sunil Kumar Kori Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 3 + drivers/net/octeontx2/otx2_ethdev.h | 6 ++ drivers/net/octeontx2/otx2_mac.c | 77 ++++++++++++++++++++++ 6 files changed, 89 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 9f682609d..566496113 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Promiscuous mode = Y Allmulticast mode = Y +Unicast MAC filter = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 764e95ce6..195a48940 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Promiscuous mode = Y Allmulticast mode = Y +Unicast MAC filter = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 9ef7be08f..8385c9c18 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Promiscuous mode - SR-IOV VF - Lock-free Tx queue +- MAC filtering - Port hardware statistics - Link state information - Debug utilities - Context dump and error interrupt support diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 826ce7f4e..a72c901f4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -237,6 +237,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .mac_addr_add = otx2_nix_mac_addr_add, + .mac_addr_remove = otx2_nix_mac_addr_del, + .mac_addr_set = otx2_nix_mac_addr_set, .promiscuous_enable = otx2_nix_promisc_enable, .promiscuous_disable = otx2_nix_promisc_disable, .allmulticast_enable = 
otx2_nix_allmulticast_enable, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 814fd6ec3..56517845b 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -232,7 +232,13 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); /* Mac address handling */ +int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, + struct rte_ether_addr *addr); int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr); +int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, + struct rte_ether_addr *addr, + uint32_t index, uint32_t pool); +void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index); int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); /* Devargs */ diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c index 89b0ca6b0..b4bcc61f8 100644 --- a/drivers/net/octeontx2/otx2_mac.c +++ b/drivers/net/octeontx2/otx2_mac.c @@ -49,6 +49,83 @@ otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev) return rsp->max_dmac_filters; } +int +otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr, + uint32_t index __rte_unused, uint32_t pool __rte_unused) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_mac_addr_add_req *req; + struct cgx_mac_addr_add_rsp *rsp; + int rc; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + if (otx2_dev_active_vfs(dev)) + return -ENOTSUP; + + req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox); + otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to add mac address, rc=%d", rc); + goto done; + } + + /* Enable promiscuous mode at NIX level */ + otx2_nix_promisc_config(eth_dev, 1); + +done: + return rc; +} + +void +otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_mac_addr_del_req *req; + int rc; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox); + req->index = index; + + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Failed to delete mac address, rc=%d", rc); +} + +int +otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_set_mac_addr *req; + int rc; + + req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox); + otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to set mac address, rc=%d", rc); + goto done; + } + + otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + /* Install the same entry into CGX DMAC filter table too. 
*/ + otx2_cgx_mac_addr_set(eth_dev, addr); + +done: + return rc; +} + int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr) { From patchwork Sun Jun 30 18:05:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55692 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6D34D1B9B2; Sun, 30 Jun 2019 20:08:04 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id BA3851B9C5 for ; Sun, 30 Jun 2019 20:07:25 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI640r016301; Sun, 30 Jun 2019 11:07:25 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=7Uz/gZFUe3eSZAu2jkEV/Cb9UdnOl7T56SUBTQ3Y170=; b=khniooIQpT20ue0GtWw/UvBmX+nMczYfyFtjIVlCByLiZaitwzDzmM3YXGCud/nU26B+ JJFuf3tve36cCoQgiCbsTcpvyJ2uXmgpyOVImHrNgh16NopGWjUyjWRRd2MvGaLpP9Pr LcXxOXV6IyyqfGiRomvyM0h7WG3B2J6v9I/8fpjcV4vhl8WzHznIJ4XwEMS8FiYTKhEE 63IX5rJ28VP7I0hhLYjyNdWWq27hVLry7x+hBqV4cCB4+YnqElMBM7BbAnSNEAGwIF7l uxESU+UB6u96FiHg9ssC8bidzG/z4+8MGMhlsYXbtLr+nOEPO1r9/H4+uWg++Z9vgDsw TQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y97-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:24 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:22 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:22 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B5F033F703F; Sun, 30 Jun 2019 11:07:20 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:28 +0530 Message-ID: <20190630180609.36705-17-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 16/57] net/octeontx2: add RSS support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add RSS support and expose RSS related functions to implement RSS action for rte_flow driver. 
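For context, the reta_update/reta_query and rss_hash_update/rss_hash_conf_get ops added below sit behind DPDK's generic RSS API (rte_eth_dev_rss_hash_update, rte_eth_dev_rss_reta_update). A minimal application-side sketch follows; it is illustrative only and not part of the patch: the helper name, the chosen hash fields, and the assumption that reta_size does not exceed 256 entries are all the editor's, not the driver's.

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper; port_id and queue counts come from the caller. */
static int
app_enable_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf.rss_conf = {
			.rss_key = NULL, /* NULL: PMD programs its default key */
			.rss_hf = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
		},
	};
	struct rte_eth_rss_reta_entry64 reta[ETH_RSS_RETA_SIZE_256 /
					     RTE_RETA_GROUP_SIZE];
	struct rte_eth_dev_info dev_info;
	uint16_t i, reta_size;
	int rc;

	rc = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	if (rc)
		return rc;

	/* This driver requires reta_size to match its RSS table size
	 * exactly, so query it instead of hard-coding a value.
	 */
	rte_eth_dev_info_get(port_id, &dev_info);
	reta_size = dev_info.reta_size;
	if (reta_size > ETH_RSS_RETA_SIZE_256)
		return -EINVAL;

	/* Spread the indirection table round-robin over the Rx queues */
	memset(reta, 0, sizeof(reta));
	for (i = 0; i < reta_size; i++) {
		reta[i / RTE_RETA_GROUP_SIZE].mask |=
			1ULL << (i % RTE_RETA_GROUP_SIZE);
		reta[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
			i % nb_rxq;
	}

	return rte_eth_dev_rss_reta_update(port_id, reta, reta_size);
}

Passing a NULL rss_key, as above, lets the PMD fall back to the default key it programs in otx2_nix_rss_set_key().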
Signed-off-by: Vamsi Attunuru Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 4 + doc/guides/nics/features/octeontx2_vec.ini | 4 + doc/guides/nics/features/octeontx2_vf.ini | 4 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 11 + drivers/net/octeontx2/otx2_ethdev.h | 33 ++ drivers/net/octeontx2/otx2_rss.c | 372 +++++++++++++++++++++ 9 files changed, 431 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_rss.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 566496113..f2d47d57b 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -13,6 +13,10 @@ Link status event = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 195a48940..a67353d2a 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -13,6 +13,10 @@ Link status event = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 6ec83e823..97d66ddde 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -9,6 +9,10 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 8385c9c18..3bee3f3ca 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Promiscuous mode - SR-IOV VF - Lock-free Tx queue +- Receiver Side Scaling (RSS) - MAC filtering - Port hardware statistics - Link state information diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index fbe5e9f44..f9f9ae6e6 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -28,6 +28,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_rss.c \ otx2_mac.c \ otx2_link.c \ otx2_stats.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 1c57b1bb4..8681a2642 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files( + 'otx2_rss.c', 'otx2_mac.c', 'otx2_link.c', 'otx2_stats.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index a72c901f4..5289c79e8 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -195,6 +195,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto fail; } + /* Configure RSS */ + rc = otx2_nix_rss_config(eth_dev); + if (rc) { + otx2_err("Failed to configure rss rc=%d", rc); + goto free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -245,6 +252,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { 
.allmulticast_enable = otx2_nix_allmulticast_enable, .allmulticast_disable = otx2_nix_allmulticast_disable, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, + .reta_update = otx2_nix_dev_reta_update, + .reta_query = otx2_nix_dev_reta_query, + .rss_hash_update = otx2_nix_rss_hash_update, + .rss_hash_conf_get = otx2_nix_rss_hash_conf_get, .xstats_get = otx2_nix_xstats_get, .xstats_get_names = otx2_nix_xstats_get_names, .xstats_reset = otx2_nix_xstats_reset, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 56517845b..19a4e45b0 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -59,6 +59,7 @@ #define NIX_MAX_SQB 512 #define NIX_MIN_SQB 32 +#define NIX_RSS_RETA_SIZE_MAX 256 /* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/ #define NIX_RSS_GRPS 8 #define NIX_HASH_KEY_SIZE 48 /* 352 Bits */ @@ -112,14 +113,22 @@ DEV_RX_OFFLOAD_QINQ_STRIP | \ DEV_RX_OFFLOAD_TIMESTAMP) +#define NIX_DEFAULT_RSS_CTX_GROUP 0 +#define NIX_DEFAULT_RSS_MCAM_IDX -1 + struct otx2_qint { struct rte_eth_dev *eth_dev; uint8_t qintx; }; struct otx2_rss_info { + uint64_t nix_rss; + uint32_t flowkey_cfg; uint16_t rss_size; uint8_t rss_grps; + uint8_t alg_idx; /* Selected algo index */ + uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX]; + uint8_t key[NIX_HASH_KEY_SIZE]; }; struct otx2_npc_flow_info { @@ -225,6 +234,30 @@ int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, struct rte_eth_xstat_name *xstats_names, const uint64_t *ids, unsigned int limit); +/* RSS */ +void otx2_nix_rss_set_key(struct otx2_eth_dev *dev, + uint8_t *key, uint32_t key_len); +uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, + uint64_t ethdev_rss, uint8_t rss_level); +int otx2_rss_set_hf(struct otx2_eth_dev *dev, + uint32_t flowkey_cfg, uint8_t *alg_idx, + uint8_t group, int mcam_index); +int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group, + uint16_t *ind_tbl); +int otx2_nix_rss_config(struct rte_eth_dev *eth_dev); + +int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf); + +int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c new file mode 100644 index 000000000..5afa21490 --- /dev/null +++ b/drivers/net/octeontx2/otx2_rss.c @@ -0,0 +1,372 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" + +int +otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, + uint8_t group, uint16_t *ind_tbl) +{ + struct otx2_rss_info *rss = &dev->rss_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + int rc, idx; + + for (idx = 0; idx < rss->rss_size; idx++) { + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) { + /* The shared memory buffer can be full. 
+ * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) + return -ENOMEM; + } + req->rss.rq = ind_tbl[idx]; + /* Fill AQ info */ + req->qidx = (group * rss->rss_size) + idx; + req->ctype = NIX_AQ_CTYPE_RSS; + req->op = NIX_AQ_INSTOP_INIT; + } + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + return 0; +} + +int +otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_rss_info *rss = &dev->rss_info; + int rc, i, j; + int idx = 0; + + rc = -EINVAL; + if (reta_size != dev->rss_info.rss_size) { + otx2_err("Size of the configured hash lookup table " + "(%d) doesn't match the size the hardware supports " + "(%d)", reta_size, dev->rss_info.rss_size); + goto fail; + } + + /* Copy RETA table */ + for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) { + for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) { + if ((reta_conf[i].mask >> j) & 0x01) + rss->ind_tbl[idx] = reta_conf[i].reta[j]; + idx++; + } + } + + return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl); + +fail: + return rc; +} + +int +otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_rss_info *rss = &dev->rss_info; + int rc, i, j; + + rc = -EINVAL; + + if (reta_size != dev->rss_info.rss_size) { + otx2_err("Size of the configured hash lookup table " + "(%d) doesn't match the size the hardware supports " + "(%d)", reta_size, dev->rss_info.rss_size); + goto fail; + } + + /* Copy RETA table; index the flat table across all groups */ + for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) { + for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) + if ((reta_conf[i].mask >> j) & 0x01) + reta_conf[i].reta[j] = rss->ind_tbl[i * RTE_RETA_GROUP_SIZE + j]; + } + + return 0; + +fail: + return rc; +} + +void +otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key, + uint32_t key_len) +{ + const uint8_t default_key[NIX_HASH_KEY_SIZE] = { + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD + }; + struct otx2_rss_info *rss = &dev->rss_info; + uint64_t *keyptr; + uint64_t val; + uint32_t idx; + + if (key == NULL || key_len == 0) { + keyptr = (uint64_t *)(uintptr_t)default_key; + key_len = NIX_HASH_KEY_SIZE; + memset(rss->key, 0, key_len); + } else { + memcpy(rss->key, key, key_len); + keyptr = (uint64_t *)rss->key; + } + + for (idx = 0; idx < (key_len >> 3); idx++) { + val = rte_cpu_to_be_64(*keyptr); + otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx)); + keyptr++; + } +} + +static void +rss_get_key(struct otx2_eth_dev *dev, uint8_t *key) +{ + uint64_t *keyptr = (uint64_t *)key; + uint64_t val; + int idx; + + for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) { + val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx)); + *keyptr = rte_be_to_cpu_64(val); + keyptr++; + } +} + +#define RSS_IPV4_ENABLE ( \ + ETH_RSS_IPV4 | \ + ETH_RSS_FRAG_IPV4 | \ + ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV4_TCP | \ + ETH_RSS_NONFRAG_IPV4_SCTP) + +#define RSS_IPV6_ENABLE ( \ + ETH_RSS_IPV6 | \ + 
ETH_RSS_FRAG_IPV6 | \ + ETH_RSS_NONFRAG_IPV6_UDP | \ + ETH_RSS_NONFRAG_IPV6_TCP | \ + ETH_RSS_NONFRAG_IPV6_SCTP) + +#define RSS_IPV6_EX_ENABLE ( \ + ETH_RSS_IPV6_EX | \ + ETH_RSS_IPV6_TCP_EX | \ + ETH_RSS_IPV6_UDP_EX) + +#define RSS_MAX_LEVELS 3 + +#define RSS_IPV4_INDEX 0 +#define RSS_IPV6_INDEX 1 +#define RSS_TCP_INDEX 2 +#define RSS_UDP_INDEX 3 +#define RSS_SCTP_INDEX 4 +#define RSS_DMAC_INDEX 5 + +uint32_t +otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss, + uint8_t rss_level) +{ + uint32_t flow_key_type[RSS_MAX_LEVELS][6] = { + { + FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6, + FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP, + FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC + }, + { + FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6, + FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP, + FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC + }, + { + FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4, + FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6, + FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP, + FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP, + FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP, + FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC + } + }; + uint32_t flowkey_cfg = 0; + + dev->rss_info.nix_rss = ethdev_rss; + + if (ethdev_rss & RSS_IPV4_ENABLE) + flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX]; + + if (ethdev_rss & RSS_IPV6_ENABLE) + flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX]; + + if (ethdev_rss & ETH_RSS_TCP) + flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX]; + + if (ethdev_rss & ETH_RSS_UDP) + flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX]; + + if (ethdev_rss & ETH_RSS_SCTP) + flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX]; + + if (ethdev_rss & ETH_RSS_L2_PAYLOAD) + flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX]; + + if (ethdev_rss & RSS_IPV6_EX_ENABLE) + flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT; + + if (ethdev_rss & ETH_RSS_PORT) + flowkey_cfg |= FLOW_KEY_TYPE_PORT; + + if (ethdev_rss & ETH_RSS_NVGRE) + flowkey_cfg |= FLOW_KEY_TYPE_NVGRE; + + if (ethdev_rss & ETH_RSS_VXLAN) + flowkey_cfg |= FLOW_KEY_TYPE_VXLAN; + + if (ethdev_rss & ETH_RSS_GENEVE) + flowkey_cfg |= FLOW_KEY_TYPE_GENEVE; + + return flowkey_cfg; +} + +int +otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg, + uint8_t *alg_idx, uint8_t group, int mcam_index) +{ + struct nix_rss_flowkey_cfg_rsp *rss_rsp; + struct otx2_mbox *mbox = dev->mbox; + struct nix_rss_flowkey_cfg *cfg; + int rc; + + rc = -EINVAL; + + dev->rss_info.flowkey_cfg = flowkey_cfg; + + cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox); + + cfg->flowkey_cfg = flowkey_cfg; + cfg->mcam_index = mcam_index; /* -1 indicates default group */ + cfg->group = group; /* 0 is default group */ + + rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp); + if (rc) + return rc; + + if (alg_idx) + *alg_idx = rss_rsp->alg_idx; + + return rc; +} + +int +otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint32_t flowkey_cfg; + uint8_t alg_idx; + int rc; + + rc = -EINVAL; + + if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) { + otx2_err("Hash key size mismatch %d vs %d", + rss_conf->rss_key_len, NIX_HASH_KEY_SIZE); + goto fail; + } + + if (rss_conf->rss_key) + otx2_nix_rss_set_key(dev, rss_conf->rss_key, + (uint32_t)rss_conf->rss_key_len); + + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, 0); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx, + NIX_DEFAULT_RSS_CTX_GROUP, + 
NIX_DEFAULT_RSS_MCAM_IDX); + if (rc) { + otx2_err("Failed to set RSS hash function rc=%d", rc); + return rc; + } + + dev->rss_info.alg_idx = alg_idx; + +fail: + return rc; +} + +int +otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + if (rss_conf->rss_key) + rss_get_key(dev, rss_conf->rss_key); + + rss_conf->rss_key_len = NIX_HASH_KEY_SIZE; + rss_conf->rss_hf = dev->rss_info.nix_rss; + + return 0; +} + +int +otx2_nix_rss_config(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint32_t idx, qcnt = eth_dev->data->nb_rx_queues; + uint32_t flowkey_cfg; + uint64_t rss_hf; + uint8_t alg_idx; + int rc; + + /* Skip further configuration if selected mode is not RSS */ + if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) + return 0; + + /* Update default RSS key and cfg */ + otx2_nix_rss_set_key(dev, NULL, 0); + + /* Update default RSS RETA */ + for (idx = 0; idx < dev->rss_info.rss_size; idx++) + dev->rss_info.ind_tbl[idx] = idx % qcnt; + + /* Init RSS table context */ + rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl); + if (rc) { + otx2_err("Failed to init RSS table rc=%d", rc); + return rc; + } + + rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf; + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, 0); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx, + NIX_DEFAULT_RSS_CTX_GROUP, + NIX_DEFAULT_RSS_MCAM_IDX); + if (rc) { + otx2_err("Failed to set RSS hash function rc=%d", rc); + return rc; + } + + dev->rss_info.alg_idx = alg_idx; + + return 0; +} From patchwork Sun Jun 30 18:05:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55693 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 834F01B9EB; Sun, 30 Jun 2019 20:08:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id E670C1B9CD for ; Sun, 30 Jun 2019 20:07:28 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5cCJ016218; Sun, 30 Jun 2019 11:07:28 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=yO6TVaPMy/GzI7AWzL98HezON4bawtI1zuWgNI90IZ0=; b=jdEDaSsTQ9DgYl/hFMdm26+eL5OCqnY/+fFhnI9qCYpsG8CwVsC1jwY4bv3tyC8boLvL EDrccaUFJtLdK921IYx8W5c95o7uStuFRM0d4z7iM+ZymjfQXSbzJEVYvmTZztK6ujyf f+pd3SzCqiDZrWx4iPTM3CNnTTJsZT0uwdr4bpkbFrU1VeRj+Vug7Bi/G6y7Kn4sApTT LYiuIlAryR8ECIhlgHWCqAnE8CmjexMLl9PMP0ClBoz1CFBXWtSVXDLQSIBB/nXBikc7 tUnn+WvQkWD4sIHOLHSwCf/9GC6+T608E+yQlkLEoy07tpTO3TCIohk4yypqXkJD3U4G HA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3y9f-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:28 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:26 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com 
(10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:26 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C87FE3F703F; Sun, 30 Jun 2019 11:07:23 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K , Thomas Monjalon CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:29 +0530 Message-ID: <20190630180609.36705-18-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 17/57] net/octeontx2: add Rx queue setup and release X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx queue setup and release. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/Makefile | 2 +- drivers/net/octeontx2/otx2_ethdev.c | 310 +++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 51 ++++ drivers/net/octeontx2/otx2_ethdev_ops.c | 2 + mk/rte.app.mk | 2 +- 8 files changed, 368 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index f2d47d57b..d0a2204d2 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -10,6 +10,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Runtime Rx queue setup = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index a67353d2a..64125a73f 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -10,6 +10,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Runtime Rx queue setup = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 97d66ddde..acda5e680 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -9,6 +9,7 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +Runtime Rx queue setup = Y RSS hash = Y RSS key update = Y RSS reta update = Y diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index f9f9ae6e6..f40561afb 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -39,6 +39,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ethdev_devargs.c LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 -lrte_eal -LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs +LDLIBS += -lrte_ethdev -lrte_bus_pci -lrte_kvargs -lrte_mbuf -lrte_mempool -lm include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 5289c79e8..dbbc2263d 100644 --- 
a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -2,9 +2,15 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include +#include + #include #include #include +#include +#include +#include #include "otx2_ethdev.h" @@ -114,6 +120,308 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +static inline void +nix_rx_queue_reset(struct otx2_eth_rxq *rxq) +{ + rxq->head = 0; + rxq->available = 0; +} + +static inline uint32_t +nix_qsize_to_val(enum nix_q_size_e qsize) +{ + return (16UL << (qsize * 2)); +} + +static inline enum nix_q_size_e +nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val) +{ + int i; + + if (otx2_ethdev_fixup_is_min_4k_q(dev)) + i = nix_q_size_4K; + else + i = nix_q_size_16; + + for (; i < nix_q_size_max; i++) + if (val <= nix_qsize_to_val(i)) + break; + + if (i >= nix_q_size_max) + i = nix_q_size_max - 1; + + return i; +} + +static int +nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, + uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp) +{ + struct otx2_mbox *mbox = dev->mbox; + const struct rte_memzone *rz; + uint32_t ring_size, cq_size; + struct nix_aq_enq_req *aq; + uint16_t first_skip; + int rc; + + cq_size = rxq->qlen; + ring_size = cq_size * NIX_CQ_ENTRY_SZ; + rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size, + NIX_CQ_ALIGN, dev->node); + if (rz == NULL) { + otx2_err("Failed to allocate mem for cq hw ring"); + rc = -ENOMEM; + goto fail; + } + memset(rz->addr, 0, rz->len); + rxq->desc = (uintptr_t)rz->addr; + rxq->qmask = cq_size - 1; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_INIT; + + aq->cq.ena = 1; + aq->cq.caching = 1; + aq->cq.qsize = rxq->qsize; + aq->cq.base = rz->iova; + aq->cq.avg_level = 0xff; + aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); + aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); + + /* Many to one reduction */ + aq->cq.qint_idx = qid % dev->qints; + + if (otx2_ethdev_fixup_is_limit_cq_full(dev)) { + uint16_t min_rx_drop; + const float rx_cq_skid = 1024 * 256; + + min_rx_drop = ceil(rx_cq_skid / (float)cq_size); + aq->cq.drop = min_rx_drop; + aq->cq.drop_ena = 1; + } + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to init cq context"); + goto fail; + } + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_INIT; + + aq->rq.sso_ena = 0; + aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */ + aq->rq.spb_ena = 0; + aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id); + first_skip = (sizeof(struct rte_mbuf)); + first_skip += RTE_PKTMBUF_HEADROOM; + first_skip += rte_pktmbuf_priv_size(mp); + rxq->data_off = first_skip; + + first_skip /= 8; /* Expressed in number of dwords */ + aq->rq.first_skip = first_skip; + aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8); + aq->rq.flow_tagw = 32; /* 32-bits */ + aq->rq.lpb_sizem1 = rte_pktmbuf_data_room_size(mp); + aq->rq.lpb_sizem1 += rte_pktmbuf_priv_size(mp); + aq->rq.lpb_sizem1 += sizeof(struct rte_mbuf); + aq->rq.lpb_sizem1 /= 8; + aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */ + aq->rq.ena = 1; + aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ + aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ + aq->rq.rq_int_ena = 0; + /* Many to one reduction */ + aq->rq.qint_idx = qid % dev->qints; + + if (otx2_ethdev_fixup_is_limit_cq_full(dev)) + aq->rq.xqe_drop_ena = 1; + + rc = otx2_mbox_process(mbox); + if 
(rc) { + otx2_err("Failed to init rq context"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + int rc; + + /* RQ is already disabled */ + /* Disable CQ */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->cq.ena = 0; + aq->cq_mask.ena = ~(aq->cq_mask.ena); + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to disable cq context"); + return rc; + } + + return 0; +} + +static inline int +nix_get_data_off(struct otx2_eth_dev *dev) +{ + RTE_SET_USED(dev); + + return 0; +} + +uint64_t +otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id) +{ + struct rte_mbuf mb_def; + uint64_t *tmp; + + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) - + offsetof(struct rte_mbuf, data_off) != 2); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) - + offsetof(struct rte_mbuf, data_off) != 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) - + offsetof(struct rte_mbuf, data_off) != 6); + mb_def.nb_segs = 1; + mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev); + mb_def.port = port_id; + rte_mbuf_refcnt_set(&mb_def, 1); + + /* Prevent compiler reordering: rearm_data covers previous fields */ + rte_compiler_barrier(); + tmp = (uint64_t *)&mb_def.rearm_data; + + return *tmp; +} + +static void +otx2_nix_rx_queue_release(void *rx_queue) +{ + struct otx2_eth_rxq *rxq = rx_queue; + + if (!rxq) + return; + + otx2_nix_dbg("Releasing rxq %u", rxq->rq); + nix_cq_rq_uninit(rxq->eth_dev, rxq); + rte_free(rx_queue); +} + +static int +otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, + uint16_t nb_desc, unsigned int socket, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_mempool_ops *ops; + struct otx2_eth_rxq *rxq; + const char *platform_ops; + enum nix_q_size_e qsize; + uint64_t offloads; + int rc; + + rc = -EINVAL; + + /* Compile time check to make sure all fast path elements in a CL */ + RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128); + + /* Sanity checks */ + if (rx_conf->rx_deferred_start == 1) { + otx2_err("Deferred Rx start is not supported"); + goto fail; + } + + platform_ops = rte_mbuf_platform_mempool_ops(); + /* This driver needs octeontx2_npa mempool ops to work */ + ops = rte_mempool_get_ops(mp->ops_index); + if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) { + otx2_err("mempool ops should be of octeontx2_npa type"); + goto fail; + } + + if (mp->pool_id == 0) { + otx2_err("Invalid pool_id"); + goto fail; + } + + /* Free memory prior to re-allocation if needed */ + if (eth_dev->data->rx_queues[rq] != NULL) { + otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq); + otx2_nix_rx_queue_release(eth_dev->data->rx_queues[rq]); + eth_dev->data->rx_queues[rq] = NULL; + } + + offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads; + dev->rx_offloads |= offloads; + + /* Find the CQ queue size */ + qsize = nix_qsize_clampup_get(dev, nb_desc); + /* Allocate rxq memory */ + rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket); + if (rxq == NULL) { + otx2_err("Failed to allocate rq=%d", rq); + rc = -ENOMEM; + goto 
fail; + } + + rxq->eth_dev = eth_dev; + rxq->rq = rq; + rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR; + rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS); + rxq->wdata = (uint64_t)rq << 32; + rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id); + rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev, + eth_dev->data->port_id); + rxq->offloads = offloads; + rxq->pool = mp; + rxq->qlen = nix_qsize_to_val(qsize); + rxq->qsize = qsize; + + /* Alloc completion queue */ + rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp); + if (rc) { + otx2_err("Failed to allocate rxq=%u", rq); + goto free_rxq; + } + + rxq->qconf.socket_id = socket; + rxq->qconf.nb_desc = nb_desc; + rxq->qconf.mempool = mp; + memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf)); + + nix_rx_queue_reset(rxq); + otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d", + rq, mp->name, qsize, nb_desc, rxq->qlen); + + eth_dev->data->rx_queues[rq] = rxq; + eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; + +free_rxq: + otx2_nix_rx_queue_release(rxq); +fail: + return rc; +} + static int otx2_nix_configure(struct rte_eth_dev *eth_dev) { @@ -241,6 +549,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .rx_queue_setup = otx2_nix_rx_queue_setup, + .rx_queue_release = otx2_nix_rx_queue_release, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 19a4e45b0..a09393336 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -10,6 +10,9 @@ #include #include #include +#include +#include +#include #include "otx2_common.h" #include "otx2_dev.h" @@ -68,6 +71,7 @@ #define NIX_RX_MIN_DESC_ALIGN 16 #define NIX_RX_NB_SEG_MAX 6 #define NIX_CQ_ENTRY_SZ 128 +#define NIX_CQ_ALIGN 512 /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -116,6 +120,19 @@ #define NIX_DEFAULT_RSS_CTX_GROUP 0 #define NIX_DEFAULT_RSS_MCAM_IDX -1 +enum nix_q_size_e { + nix_q_size_16, /* 16 entries */ + nix_q_size_64, /* 64 entries */ + nix_q_size_256, + nix_q_size_1K, + nix_q_size_4K, + nix_q_size_16K, + nix_q_size_64K, + nix_q_size_256K, + nix_q_size_1M, /* Million entries */ + nix_q_size_max +}; + struct otx2_qint { struct rte_eth_dev *eth_dev; uint8_t qintx; @@ -131,6 +148,16 @@ struct otx2_rss_info { uint8_t key[NIX_HASH_KEY_SIZE]; }; +struct otx2_eth_qconf { + union { + struct rte_eth_txconf tx; + struct rte_eth_rxconf rx; + } conf; + void *mempool; + uint32_t socket_id; + uint16_t nb_desc; +}; + struct otx2_npc_flow_info { uint16_t channel; /*rx channel */ uint16_t flow_prealloc_size; @@ -177,6 +204,29 @@ struct otx2_eth_dev { struct rte_eth_dev *eth_dev; } __rte_cache_aligned; +struct otx2_eth_rxq { + uint64_t mbuf_initializer; + uint64_t data_off; + uintptr_t desc; + void *lookup_mem; + uintptr_t cq_door; + uint64_t wdata; + int64_t *cq_status; + uint32_t head; + uint32_t qmask; + uint32_t available; + uint16_t rq; + struct otx2_timesync_info *tstamp; + MARKER slow_path_start; + uint64_t aura; + uint64_t offloads; + uint32_t qlen; + struct rte_mempool *pool; + enum nix_q_size_e qsize; + struct rte_eth_dev *eth_dev; + struct otx2_eth_qconf qconf; +} __rte_cache_aligned; + static inline struct otx2_eth_dev * otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) { 
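As the setup path above shows, the queue is rejected unless the mempool uses the octeontx2_npa ops and deferred Rx start is not requested. A rough application-side sketch of a matching call sequence (the helper name, pool name, and sizes are editorial assumptions; rte_pktmbuf_pool_create() normally resolves to the platform mempool ops, which the octeontx2 mempool driver registers as "octeontx2_npa"):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Hypothetical helper: set up one Rx queue as this driver expects. */
static int
app_setup_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc)
{
	struct rte_mempool *mp;

	/* Created with the platform mempool ops, i.e. octeontx2_npa here,
	 * since this Rx setup path rejects any other pool ops.
	 */
	mp = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE,
				     rte_socket_id());
	if (mp == NULL)
		return -ENOMEM;

	/* NULL rx_conf selects the PMD defaults; rx_deferred_start would
	 * be refused by this driver.
	 */
	return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
				      rte_socket_id(), NULL, mp);
}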
@@ -192,6 +242,7 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); +uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 301a597f8..71d36b44a 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -143,4 +143,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G; + + devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP; } diff --git a/mk/rte.app.mk b/mk/rte.app.mk index fab72ff6a..a852e5157 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -196,7 +196,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2 _LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null -_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 +_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring From patchwork Sun Jun 30 18:05:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55694 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DFB841B9FA; Sun, 30 Jun 2019 20:08:08 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 5E2ED1B9C6 for ; Sun, 30 Jun 2019 20:07:31 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5DtQ027158; Sun, 30 Jun 2019 11:07:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Nu0mD+wmD5dHsdykVF6xAVu74pLVtX6Kvx1M9GLpVqU=; b=M2GzJmHuJcr7awYzvV8F69wT61Bu1+0WfVq9Ovju8ElU+M7e5nBlivJCMOl4VFWcORe1 I8mGKsTxu55pqOk0wZIAz/WIoRJgI7yEoZOVBa2B6MOojX4luoEZxJzgOpR92cxOawea OAGoNPE4+o2TR4rcajdcK8C3WOmGg+jQfA37EhKWF8/A9rltSLw2XR0GjIxXYrK1axR9 LLW2eAfhbEtG1F3eIq57Uvc4QlvBHDATYrc+JcBNt+kOPMCLtLkS0lmaW61PucI/Gutx Z3OXqQUhcpkcSmq9nFyHxAfVh/LDnYa/O74hCY1msk2xQWTlOv4CQ9rCmkmWClqpdSDS vg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gga-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:30 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:29 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with 
Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:29 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 1FCEF3F703F; Sun, 30 Jun 2019 11:07:26 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K Date: Sun, 30 Jun 2019 23:35:30 +0530 Message-ID: <20190630180609.36705-19-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 18/57] net/octeontx2: add Tx queue setup and release X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Tx queue setup and release. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 385 ++++++++++++++++++++- drivers/net/octeontx2/otx2_ethdev.h | 25 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 3 +- drivers/net/octeontx2/otx2_tx.h | 28 ++ 8 files changed, 443 insertions(+), 2 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_tx.h diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index d0a2204d2..c8f07fa1d 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -11,6 +11,7 @@ Multiprocess aware = Y Link status = Y Link status event = Y Runtime Rx queue setup = Y +Runtime Tx queue setup = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 64125a73f..a98b7d523 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -11,6 +11,7 @@ Multiprocess aware = Y Link status = Y Link status event = Y Runtime Rx queue setup = Y +Runtime Tx queue setup = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index acda5e680..9746357ce 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -10,6 +10,7 @@ Multiprocess aware = Y Link status = Y Link status event = Y Runtime Rx queue setup = Y +Runtime Tx queue setup = Y RSS hash = Y RSS key update = Y RSS reta update = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 3bee3f3ca..d7e8f3d56 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -19,6 +19,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Promiscuous mode - SR-IOV VF - Lock-free Tx queue +- Multiple queues for TX and RX - Receiver Side Scaling (RSS) - MAC filtering - Port hardware statistics diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index dbbc2263d..62943cc31 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -422,6 
+422,373 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, return rc; } +static inline uint8_t +nix_sq_max_sqe_sz(struct otx2_eth_txq *txq) +{ + /* + * Maximum three segments can be supported with W8, Choose + * NIX_MAXSQESZ_W16 for multi segment offload. + */ + if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) + return NIX_MAXSQESZ_W16; + else + return NIX_MAXSQESZ_W8; +} + +static int +nix_sq_init(struct otx2_eth_txq *txq) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *sq; + + if (txq->sqb_pool->pool_id == 0) + return -EINVAL; + + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + sq->qidx = txq->sq; + sq->ctype = NIX_AQ_CTYPE_SQ; + sq->op = NIX_AQ_INSTOP_INIT; + sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq); + + sq->sq.default_chan = dev->tx_chan_base; + sq->sq.sqe_stype = NIX_STYPE_STF; + sq->sq.ena = 1; + if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8) + sq->sq.sqe_stype = NIX_STYPE_STP; + sq->sq.sqb_aura = + npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id); + sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR); + + /* Many to one reduction */ + sq->sq.qint_idx = txq->sq % dev->qints; + + return otx2_mbox_process(mbox); +} + +static int +nix_sq_uninit(struct otx2_eth_txq *txq) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct ndc_sync_op *ndc_req; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + uint16_t sqes_per_sqb; + void *sqb_buf; + int rc, count; + + otx2_nix_dbg("Cleaning up sq %u", txq->sq); + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Check if sq is already cleaned up */ + if (!rsp->sq.ena) + return 0; + + /* Disable sq */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->sq_mask.ena = ~aq->sq_mask.ena; + aq->sq.ena = 0; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + /* Read SQ and free sqb's */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (aq->sq.smq_pend) + otx2_err("SQ has pending sqe's"); + + count = aq->sq.sqb_count; + sqes_per_sqb = 1 << txq->sqes_per_sqb_log2; + /* Free SQB's that are used */ + sqb_buf = (void *)rsp->sq.head_sqb; + while (count) { + void *next_sqb; + + next_sqb = *(void **)((uintptr_t)sqb_buf + ((sqes_per_sqb - 1) * + nix_sq_max_sqe_sz(txq))); + npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1, + (uint64_t)sqb_buf); + sqb_buf = next_sqb; + count--; + } + + /* Free next to use sqb */ + if (rsp->sq.next_sqb) + npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1, + rsp->sq.next_sqb); + + /* Sync NDC-NIX-TX for LF */ + ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); + ndc_req->nix_lf_tx_sync = 1; + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc); + + return rc; +} + +static int +nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc) +{ + struct otx2_eth_dev *dev = txq->dev; + uint16_t sqes_per_sqb, nb_sqb_bufs; + char name[RTE_MEMPOOL_NAMESIZE]; + struct rte_mempool_objsz sz; + struct npa_aura_s *aura; + uint32_t tmp, 
blk_sz; + + aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN); + snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq); + blk_sz = dev->sqb_size; + + if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16) + sqes_per_sqb = (dev->sqb_size / 8) / 16; + else + sqes_per_sqb = (dev->sqb_size / 8) / 8; + + nb_sqb_bufs = nb_desc / sqes_per_sqb; + /* Clamp up to devarg passed SQB count */ + nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_MIN_SQB, + nb_sqb_bufs + NIX_SQB_LIST_SPACE)); + + txq->sqb_pool = rte_mempool_create_empty(name, nb_sqb_bufs, blk_sz, + 0, 0, dev->node, + MEMPOOL_F_NO_SPREAD); + txq->nb_sqb_bufs = nb_sqb_bufs; + txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb); + txq->nb_sqb_bufs_adj = nb_sqb_bufs - + RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb; + txq->nb_sqb_bufs_adj = + (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100; + + if (txq->sqb_pool == NULL) { + otx2_err("Failed to allocate sqe mempool"); + goto fail; + } + + memset(aura, 0, sizeof(*aura)); + aura->fc_ena = 1; + aura->fc_addr = txq->fc_iova; + aura->fc_hyst_bits = 0; /* Store count on all updates */ + if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) { + otx2_err("Failed to set ops for sqe mempool"); + goto fail; + } + if (rte_mempool_populate_default(txq->sqb_pool) < 0) { + otx2_err("Failed to populate sqe mempool"); + goto fail; + } + + tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz); + if (dev->sqb_size != sz.elt_size) { + otx2_err("sqe pool block size is not expected %d != %d", + dev->sqb_size, tmp); + goto fail; + } + + return 0; +fail: + return -ENOMEM; +} + +void +otx2_nix_form_default_desc(struct otx2_eth_txq *txq) +{ + struct nix_send_ext_s *send_hdr_ext; + struct nix_send_hdr_s *send_hdr; + struct nix_send_mem_s *send_mem; + union nix_send_sg_s *sg; + + /* Initialize the fields based on basic single segment packet */ + memset(&txq->cmd, 0, sizeof(txq->cmd)); + + if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) { + send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0]; + /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */ + send_hdr->w0.sizem1 = 2; + + send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2]; + send_hdr_ext->w0.subdc = NIX_SUBDC_EXT; + if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) { + /* Default: one seg packet would have: + * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM) + * => 8/2 - 1 = 3 + */ + send_hdr->w0.sizem1 = 3; + send_hdr_ext->w0.tstmp = 1; + + /* To calculate the offset for send_mem, + * send_hdr->w0.sizem1 * 2 + */ + send_mem = (struct nix_send_mem_s *)(txq->cmd + + (send_hdr->w0.sizem1 << 1)); + send_mem->subdc = NIX_SUBDC_MEM; + send_mem->dsz = 0x0; + send_mem->wmem = 0x1; + send_mem->alg = NIX_SENDMEMALG_SETTSTMP; + } + sg = (union nix_send_sg_s *)&txq->cmd[4]; + } else { + send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0]; + /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */ + send_hdr->w0.sizem1 = 1; + sg = (union nix_send_sg_s *)&txq->cmd[2]; + } + + send_hdr->w0.sq = txq->sq; + sg->subdc = NIX_SUBDC_SG; + sg->segs = 1; + sg->ld_type = NIX_SENDLDTYPE_LDD; + + rte_smp_wmb(); +} + +static void +otx2_nix_tx_queue_release(void *_txq) +{ + struct otx2_eth_txq *txq = _txq; + + if (!txq) + return; + + otx2_nix_dbg("Releasing txq %u", txq->sq); + + /* Free sqb's and disable sq */ + nix_sq_uninit(txq); + + if (txq->sqb_pool) { + rte_mempool_free(txq->sqb_pool); + txq->sqb_pool = NULL; + } + rte_free(txq); +} + + +static int +otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, 
uint16_t sq, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + const struct rte_memzone *fc; + struct otx2_eth_txq *txq; + uint64_t offloads; + int rc; + + rc = -EINVAL; + + /* Compile time check to make sure all fast path elements in a CL */ + RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128); + + if (tx_conf->tx_deferred_start) { + otx2_err("Tx deferred start is not supported"); + goto fail; + } + + /* Free memory prior to re-allocation if needed. */ + if (eth_dev->data->tx_queues[sq] != NULL) { + otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq); + otx2_nix_tx_queue_release(eth_dev->data->tx_queues[sq]); + eth_dev->data->tx_queues[sq] = NULL; + } + + /* Find the expected offloads for this queue */ + offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads; + + /* Allocating tx queue data structure */ + txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq), + OTX2_ALIGN, socket_id); + if (txq == NULL) { + otx2_err("Failed to alloc txq=%d", sq); + rc = -ENOMEM; + goto fail; + } + txq->sq = sq; + txq->dev = dev; + txq->sqb_pool = NULL; + txq->offloads = offloads; + dev->tx_offloads |= offloads; + + /* + * Allocate memory for flow control updates from HW. + * Alloc one cache line, so that fits all FC_STYPE modes. + */ + fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq, + OTX2_ALIGN + sizeof(struct npa_aura_s), + OTX2_ALIGN, dev->node); + if (fc == NULL) { + otx2_err("Failed to allocate mem for fcmem"); + rc = -ENOMEM; + goto free_txq; + } + txq->fc_iova = fc->iova; + txq->fc_mem = fc->addr; + + /* Initialize the aura sqb pool */ + rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc); + if (rc) { + otx2_err("Failed to alloc sqe pool rc=%d", rc); + goto free_txq; + } + + /* Initialize the SQ */ + rc = nix_sq_init(txq); + if (rc) { + otx2_err("Failed to init sq=%d context", sq); + goto free_txq; + } + + txq->fc_cache_pkts = 0; + txq->io_addr = dev->base + NIX_LF_OP_SENDX(0); + /* Evenly distribute LMT slot for each sq */ + txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12)); + + txq->qconf.socket_id = socket_id; + txq->qconf.nb_desc = nb_desc; + memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf)); + + otx2_nix_form_default_desc(txq); + + otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 "" + " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq, + fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr, + txq->nb_sqb_bufs, txq->sqes_per_sqb_log2); + eth_dev->data->tx_queues[sq] = txq; + eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; + +free_txq: + otx2_nix_tx_queue_release(txq); +fail: + return rc; +} + + static int otx2_nix_configure(struct rte_eth_dev *eth_dev) { @@ -549,6 +916,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .tx_queue_setup = otx2_nix_tx_queue_setup, + .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, .stats_get = otx2_nix_dev_stats_get, @@ -763,12 +1132,26 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct rte_pci_device *pci_dev; - int rc; + int rc, i; /* Nothing to be done for secondary processes */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) 
return 0; + /* Free up SQs */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); + eth_dev->data->tx_queues[i] = NULL; + } + eth_dev->data->nb_tx_queues = 0; + + /* Free up RQ's and CQ's */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + otx2_nix_rx_queue_release(eth_dev->data->rx_queues[i]); + eth_dev->data->rx_queues[i] = NULL; + } + eth_dev->data->nb_rx_queues = 0; + /* Unregister queue irqs */ oxt2_nix_unregister_queue_irqs(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index a09393336..0ce67f634 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -19,6 +19,7 @@ #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" +#include "otx2_tx.h" #define OTX2_ETH_DEV_PMD_VERSION "1.0" @@ -62,6 +63,7 @@ #define NIX_MAX_SQB 512 #define NIX_MIN_SQB 32 +#define NIX_SQB_LIST_SPACE 2 #define NIX_RSS_RETA_SIZE_MAX 256 /* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/ #define NIX_RSS_GRPS 8 @@ -72,6 +74,8 @@ #define NIX_RX_NB_SEG_MAX 6 #define NIX_CQ_ENTRY_SZ 128 #define NIX_CQ_ALIGN 512 +#define NIX_SQB_LOWER_THRESH 90 +#define LMT_SLOT_MASK 0x7f /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -204,6 +208,24 @@ struct otx2_eth_dev { struct rte_eth_dev *eth_dev; } __rte_cache_aligned; +struct otx2_eth_txq { + uint64_t cmd[8]; + int64_t fc_cache_pkts; + uint64_t *fc_mem; + void *lmt_addr; + rte_iova_t io_addr; + rte_iova_t fc_iova; + uint16_t sqes_per_sqb_log2; + int16_t nb_sqb_bufs_adj; + MARKER slow_path_start; + uint16_t nb_sqb_bufs; + uint16_t sq; + uint64_t offloads; + struct otx2_eth_dev *dev; + struct rte_mempool *sqb_pool; + struct otx2_eth_qconf qconf; +} __rte_cache_aligned; + struct otx2_eth_rxq { uint64_t mbuf_initializer; uint64_t data_off; @@ -329,4 +351,7 @@ int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev); +/* Rx and Tx routines */ +void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 71d36b44a..1c935b627 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -144,5 +144,6 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G; - devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP; + devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP | + RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP; } diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h new file mode 100644 index 000000000..4d0993f87 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tx.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_TX_H__ +#define __OTX2_TX_H__ + +#define NIX_TX_OFFLOAD_NONE (0) +#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0) +#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1) +#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2) +#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3) +#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4) + +/* Flags to control xmit_prepare function. 
+ * Defined from the highest bit downwards so that it is + * not mistaken for an offload flag when picking the function + */ +#define NIX_TX_MULTI_SEG_F BIT(15) + +#define NIX_TX_NEED_SEND_HDR_W1 \ + (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \ + NIX_TX_OFFLOAD_VLAN_QINQ_F) + +#define NIX_TX_NEED_EXT_HDR \ + (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F) + +#endif /* __OTX2_TX_H__ */ From patchwork Sun Jun 30 18:05:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55695 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4650F1BA0C; Sun, 30 Jun 2019 20:08:11 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id C58E41B9D2 for ; Sun, 30 Jun 2019 20:07:33 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI72j8028445 for ; Sun, 30 Jun 2019 11:07:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=OTVF+PhY5VzoC0vM4FPikUzJmjOZwN7gNPnFewZgl8s=; b=Tbu2CpSTmcYYqW/vglpvBUz+Owp2fxN8lH1LAEqI4HAk41kocjda65W8rhQKt1gw14nv 6biV+ZJw57OIRAhnEbsTnV+ljUReyz+5Q3v4t9iIXkYfAAgTKgmeM3EiJ4pSANRph2xZ 2b8i2jgfIsHHNGYNsRs5G/VF3cAX1PiQQS7LFAUjyLE/3V0dYllVQPGGqXj7O8c52n9y vzMIL9QgxP43s/LXS5MZMQqagtc7NghPWebQxM6ylqmOoA1SyOWESannu4RGRMqMxNz7 5FEe2K7ziMBsSMf8NRtj+/ymOrK20Dgk8bYfFxKGjzJknYDnV1W96FjuTr1Lf25w1oBK cA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4ggg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:32 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 521863F703F; Sun, 30 Jun 2019 11:07:30 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:31 +0530 Message-ID: <20190630180609.36705-20-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 19/57] net/octeontx2: handle port reconfigure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Set up Tx and Rx queues with the previous configuration during port reconfiguration. This handles cases where a port is reconfigured without its Tx and Rx queues being reconfigured. 
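In application terms, the flow this patch enables might look like the sketch below; the helper name and the trimmed error handling are editorial assumptions, not from the patch:

#include <rte_ethdev.h>

/* Hypothetical reconfigure flow: stop, reconfigure, start again. */
static int
app_reconfigure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
		     const struct rte_eth_conf *conf)
{
	int rc;

	rte_eth_dev_stop(port_id);

	/* On a second (or later) configure, the PMD saves each queue's
	 * config, releases the queues and the NIX LF, then recreates the
	 * queues from the saved config...
	 */
	rc = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
	if (rc)
		return rc;

	/* ...so the port can be started again without the application
	 * calling rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() anew.
	 */
	return rte_eth_dev_start(port_id);
}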
Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 2 + 2 files changed, 182 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 62943cc31..bc6e8fb8a 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -788,6 +788,172 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq, return rc; } +static int +nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_eth_qconf *tx_qconf = NULL; + struct otx2_eth_qconf *rx_qconf = NULL; + struct otx2_eth_txq **txq; + struct otx2_eth_rxq **rxq; + int i, nb_rxq, nb_txq; + + nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues); + nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues); + + tx_qconf = malloc(nb_txq * sizeof(*tx_qconf)); + if (tx_qconf == NULL) { + otx2_err("Failed to allocate memory for tx_qconf"); + goto fail; + } + + rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf)); + if (rx_qconf == NULL) { + otx2_err("Failed to allocate memory for rx_qconf"); + goto fail; + } + + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i = 0; i < nb_txq; i++) { + if (txq[i] == NULL) { + otx2_err("txq[%d] is already released", i); + goto fail; + } + memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf)); + otx2_nix_tx_queue_release(txq[i]); + eth_dev->data->tx_queues[i] = NULL; + } + + rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues; + for (i = 0; i < nb_rxq; i++) { + if (rxq[i] == NULL) { + otx2_err("rxq[%d] is already released", i); + goto fail; + } + memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf)); + otx2_nix_rx_queue_release(rxq[i]); + eth_dev->data->rx_queues[i] = NULL; + } + + dev->tx_qconf = tx_qconf; + dev->rx_qconf = rx_qconf; + return 0; + +fail: + if (tx_qconf) + free(tx_qconf); + if (rx_qconf) + free(rx_qconf); + + return -ENOMEM; +} + +static int +nix_restore_queue_cfg(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_eth_qconf *tx_qconf = dev->tx_qconf; + struct otx2_eth_qconf *rx_qconf = dev->rx_qconf; + struct otx2_eth_txq **txq; + struct otx2_eth_rxq **rxq; + int rc, i, nb_rxq, nb_txq; + + nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues); + nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues); + + rc = -ENOMEM; + /* Setup tx & rx queues with previous configuration so + * that the queues can be functional in cases like ports + * are started without re configuring queues. + * + * Usual re config sequence is like below: + * port_configure() { + * if(reconfigure) { + * queue_release() + * queue_setup() + * } + * queue_configure() { + * queue_release() + * queue_setup() + * } + * } + * port_start() + * + * In some application's control path, queue_configure() would + * NOT be invoked for TXQs/RXQs in port_configure(). + * In such cases, queues can be functional after start as the + * queues are already setup in port_configure(). 
+ */ + for (i = 0; i < nb_txq; i++) { + rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc, + tx_qconf[i].socket_id, + &tx_qconf[i].conf.tx); + if (rc) { + otx2_err("Failed to setup tx queue rc=%d", rc); + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i -= 1; i >= 0; i--) + otx2_nix_tx_queue_release(txq[i]); + goto fail; + } + } + + free(tx_qconf); tx_qconf = NULL; + + for (i = 0; i < nb_rxq; i++) { + rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc, + rx_qconf[i].socket_id, + &rx_qconf[i].conf.rx, + rx_qconf[i].mempool); + if (rc) { + otx2_err("Failed to setup rx queue rc=%d", rc); + rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues; + for (i -= 1; i >= 0; i--) + otx2_nix_rx_queue_release(rxq[i]); + goto release_tx_queues; + } + } + + free(rx_qconf); rx_qconf = NULL; + + return 0; + +release_tx_queues: + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_release(txq[i]); +fail: + if (tx_qconf) + free(tx_qconf); + if (rx_qconf) + free(rx_qconf); + + return rc; +} + +static uint16_t +nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts) +{ + RTE_SET_USED(queue); + RTE_SET_USED(mbufs); + RTE_SET_USED(pkts); + + return 0; +} + +static void +nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev) +{ + /* These dummy functions are required for supporting + * some applications which reconfigure queues without + * stopping tx burst and rx burst threads(eg kni app) + * When the queues context is saved, txq/rxqs are released + * which caused app crash since rx/tx burst is still + * on different lcores + */ + eth_dev->tx_pkt_burst = nix_eth_nop_burst; + eth_dev->rx_pkt_burst = nix_eth_nop_burst; + rte_mb(); +} static int otx2_nix_configure(struct rte_eth_dev *eth_dev) @@ -844,6 +1010,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { oxt2_nix_unregister_queue_irqs(eth_dev); + nix_set_nop_rxtx_function(eth_dev); + rc = nix_store_queue_cfg_and_then_release(eth_dev); + if (rc) + goto fail; nix_lf_free(dev); } @@ -884,6 +1054,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* + * Restore queue config when reconfigure followed by + * reconfigure and no queue configure invoked from application case. 
+ */ + if (dev->configured == 1) { + rc = nix_restore_queue_cfg(eth_dev); + if (rc) + goto free_nix_lf; + } + /* Update the mac address */ ea = eth_dev->data->mac_addrs; memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 0ce67f634..ffc350e0d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -205,6 +205,8 @@ struct otx2_eth_dev { uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; + struct otx2_eth_qconf *tx_qconf; + struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; } __rte_cache_aligned; From patchwork Sun Jun 30 18:05:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55696 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5C5D11BA46; Sun, 30 Jun 2019 20:08:14 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id D51ED1B9D3 for ; Sun, 30 Jun 2019 20:07:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI640s016301; Sun, 30 Jun 2019 11:07:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=fGSiIV4wikJ6duiPBz1nT4j7qELYP9K5Pcyse6MXJlY=; b=fx3kSBdSUvlZenSRQ1u7JFFwp4UbF8n4cHxIk2lbt8E5rSaaU1SrTyNOuVXUFg7YRJ2f BliU7KrKOneJjDSqCCpxAjntHKz123ePXeTGkzhI1bXyXTOiYXcr3sG3Ct1fjOArVd7T HYnQREReGRfWVi27Sx3tDKynvmxnApqRANOA5BUbJ2LYWvU2zQjiETrOKbABKNFMtYGl LjHpNqw1KGEalmPfItrQHUlLMyiuteA6kHh9xsFGM2JcTD0TrZECYFK0qYJM+F0AXjOQ eD6EcDbGgMwz2a/+YMhC/GVgQZntEVry0IV6Pt7M5/bIjPB35i5LRZbekCls94q/zkzj aQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3ya7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:37 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:35 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:35 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2EAE03F703F; Sun, 30 Jun 2019 11:07:32 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:32 +0530 Message-ID: <20190630180609.36705-21-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 20/57] net/octeontx2: add queue start and stop operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 
2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add queue start and stop operations. Tx queue needs to update the flow control value, Which will be added in sub subsequent patch. Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 92 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 2 + 5 files changed, 97 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index c8f07fa1d..ca40358da 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index a98b7d523..b720c116f 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 9746357ce..5a287493f 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,6 +11,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Queue start/stop = Y RSS hash = Y RSS key update = Y RSS reta update = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index bc6e8fb8a..c8271b1ab 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -252,6 +252,26 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, return rc; } +static int +nix_rq_enb_dis(struct rte_eth_dev *eth_dev, + struct otx2_eth_rxq *rxq, const bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + + /* Pkts will be dropped silently if RQ is disabled */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.ena = enb; + aq->rq_mask.ena = ~(aq->rq_mask.ena); + + return otx2_mbox_process(mbox); +} + static int nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) { @@ -1091,6 +1111,74 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) return rc; } +int +otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct rte_eth_dev_data *data = eth_dev->data; + + if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) + return 0; + + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; +} + +int +otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct rte_eth_dev_data *data = eth_dev->data; + + if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) + return 0; + + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; +} + +static int +otx2_nix_rx_queue_start(struct 
rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx]; + struct rte_eth_dev_data *data = eth_dev->data; + int rc; + + if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) + return 0; + + rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true); + if (rc) { + otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc); + goto done; + } + + data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; + +done: + return rc; +} + +static int +otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx]; + struct rte_eth_dev_data *data = eth_dev->data; + int rc; + + if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) + return 0; + + rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false); + if (rc) { + otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc); + goto done; + } + + data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; + +done: + return rc; +} + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, @@ -1100,6 +1188,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, + .tx_queue_start = otx2_nix_tx_queue_start, + .tx_queue_stop = otx2_nix_tx_queue_stop, + .rx_queue_start = otx2_nix_rx_queue_start, + .rx_queue_stop = otx2_nix_rx_queue_stop, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index ffc350e0d..4e06b7111 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -266,6 +266,8 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); +int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx); +int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx); uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); /* Link */ From patchwork Sun Jun 30 18:05:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55697 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 06B961BA59; Sun, 30 Jun 2019 20:08:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 82D311B9D7 for ; Sun, 30 Jun 2019 20:07:40 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI50uG015649 for ; Sun, 30 Jun 2019 11:07:40 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=GKpsf9GH+HpEkHT3LwPERjeqkyvx9sCt4eVMt/GUfoE=; b=Uga93pAY9jWTwOzeCkaVTiZOXtuu3YPP2UzBHUUK2j6sJ4d4TdLoVanowjjl06h5KPVr +RUKcj0iT8nnRv9Xb8prMZMXuP4TGwExHzT2J6O3kBFe6P4SA6udTOmqliYH6Om6E6EL 
DfylMrCYMfv9Kqd4rDxIVuEADaCf9ULtPOwQO2bG84PRfJgUDErDmrYk7GelrTbSuhaK 4uVRwvZJbG1ND7ExmnPhzV+t6UAqoYMdx6/yGcrWBrNHOLgNelFj/D/MpX9idb242Gpt fHfiEpsqCnfhTtUxHQ+qstBvOYdzStBPmyey2LJbWq6wKBo/fnepjPDaa+emQWw7tpnj eQ==
Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yag-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:39 -0700
Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:38 -0700
Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:38 -0700
Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 89C6A3F703F; Sun, 30 Jun 2019 11:07:36 -0700 (PDT)
From: 
To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K 
CC: Krzysztof Kanas
Date: Sun, 30 Jun 2019 23:35:33 +0530
Message-ID: <20190630180609.36705-22-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com>
MIME-Version: 1.0
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0
Subject: [dpdk-dev] [PATCH v2 21/57] net/octeontx2: introduce traffic manager
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Nithin Dabilpuram

Introduce traffic manager infra and default hierarchy creation.

Upon ethdev configure, a default hierarchy is created with one-to-one
mapped tm nodes. This topology will be overridden when the user
explicitly creates and commits a new hierarchy using the rte_tm
interface.
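For example, overriding the default tree through the generic rte_tm API
follows the pattern sketched below (SHAPER_ID, ROOT_ID, the rate and
burst values are hypothetical, error checks are omitted, and exact
parameter requirements are driver-specific):

	#define SHAPER_ID 1	/* hypothetical shaper profile id */
	#define ROOT_ID 100	/* hypothetical non-leaf node id */

	struct rte_tm_shaper_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;

	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 1250000000;	/* 10 Gbit/s in bytes/sec */
	sp.peak.size = 8192;		/* burst size in bytes */
	rte_tm_shaper_profile_add(port_id, SHAPER_ID, &sp, &err);

	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = SHAPER_ID;

	/* Root node, then one leaf per Tx queue (leaf node id == queue id) */
	rte_tm_node_add(port_id, ROOT_ID, RTE_TM_NODE_ID_NULL, 0, 1,
			RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	rte_tm_node_add(port_id, 0, ROOT_ID, 0, 1,
			RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);

	/* Replaces the default hierarchy on success */
	rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
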
Signed-off-by: Nithin Dabilpuram Signed-off-by: Krzysztof Kanas --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 16 ++ drivers/net/octeontx2/otx2_ethdev.h | 14 ++ drivers/net/octeontx2/otx2_tm.c | 252 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_tm.h | 67 ++++++++ 6 files changed, 351 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_tm.c create mode 100644 drivers/net/octeontx2/otx2_tm.h diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index f40561afb..164621087 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -28,6 +28,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ otx2_link.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 8681a2642..e344d877f 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files( + 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', 'otx2_link.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index c8271b1ab..899865749 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1034,6 +1034,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) rc = nix_store_queue_cfg_and_then_release(eth_dev); if (rc) goto fail; + otx2_nix_tm_fini(eth_dev); nix_lf_free(dev); } @@ -1067,6 +1068,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Init the default TM scheduler hierarchy */ + rc = otx2_nix_tm_init_default(eth_dev); + if (rc) { + otx2_err("Failed to init traffic manager rc=%d", rc); + goto free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -1369,6 +1377,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) /* Also sync same MAC address to CGX table */ otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]); + /* Initialize the tm data structures */ + otx2_nix_tm_conf_init(eth_dev); + dev->tx_offload_capa = nix_get_tx_offload_capa(dev); dev->rx_offload_capa = nix_get_rx_offload_capa(dev); @@ -1424,6 +1435,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) } eth_dev->data->nb_rx_queues = 0; + /* Free tm resources */ + rc = otx2_nix_tm_fini(eth_dev); + if (rc) + otx2_err("Failed to cleanup tm, rc=%d", rc); + /* Unregister queue irqs */ oxt2_nix_unregister_queue_irqs(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 4e06b7111..9f73bf89b 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -19,6 +19,7 @@ #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" +#include "otx2_tm.h" #include "otx2_tx.h" #define OTX2_ETH_DEV_PMD_VERSION "1.0" @@ -201,6 +202,19 @@ struct otx2_eth_dev { uint64_t rx_offload_capa; uint64_t tx_offload_capa; struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; + uint16_t txschq[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_contig[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_index[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT]; + /* Dis-contiguous queues */ + uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + /* Contiguous queues */ + uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + uint16_t otx2_tm_root_lvl; + uint16_t tm_flags; + uint16_t tm_leaf_cnt; + struct otx2_nix_tm_node_list node_list; + 
struct otx2_nix_tm_shaper_profile_list shaper_profile_list; struct otx2_rss_info rss_info; uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c new file mode 100644 index 000000000..bc0474242 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tm.c @@ -0,0 +1,252 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_ethdev.h" +#include "otx2_tm.h" + +/* Use last LVL_CNT nodes as default nodes */ +#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT) + +enum otx2_tm_node_level { + OTX2_TM_LVL_ROOT = 0, + OTX2_TM_LVL_SCH1, + OTX2_TM_LVL_SCH2, + OTX2_TM_LVL_SCH3, + OTX2_TM_LVL_SCH4, + OTX2_TM_LVL_QUEUE, + OTX2_TM_LVL_MAX, +}; + +static bool +nix_tm_have_tl1_access(struct otx2_eth_dev *dev) +{ + bool is_lbk = otx2_dev_is_lbk(dev); + return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) && + !is_lbk && !dev->maxvf; +} + +static struct otx2_nix_tm_shaper_profile * +nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) +{ + struct otx2_nix_tm_shaper_profile *tm_shaper_profile; + + TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) { + if (tm_shaper_profile->shaper_profile_id == shaper_id) + return tm_shaper_profile; + } + return NULL; +} + +static struct otx2_nix_tm_node * +nix_tm_node_search(struct otx2_eth_dev *dev, + uint32_t node_id, bool user) +{ + struct otx2_nix_tm_node *tm_node; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->id == node_id && + (user == !!(tm_node->flags & NIX_TM_NODE_USER))) + return tm_node; + } + return NULL; +} + +static int +nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, + uint32_t weight, uint16_t hw_lvl_id, + uint16_t level_id, bool user, + struct rte_tm_node_params *params) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + struct otx2_nix_tm_node *tm_node, *parent_node; + uint32_t shaper_profile_id; + + shaper_profile_id = params->shaper_profile_id; + shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id); + + parent_node = nix_tm_node_search(dev, parent_node_id, user); + + tm_node = rte_zmalloc("otx2_nix_tm_node", + sizeof(struct otx2_nix_tm_node), 0); + if (!tm_node) + return -ENOMEM; + + tm_node->level_id = level_id; + tm_node->hw_lvl_id = hw_lvl_id; + + tm_node->id = node_id; + tm_node->priority = priority; + tm_node->weight = weight; + tm_node->rr_prio = 0xf; + tm_node->max_prio = UINT32_MAX; + tm_node->hw_id = UINT32_MAX; + tm_node->flags = 0; + if (user) + tm_node->flags = NIX_TM_NODE_USER; + rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params)); + + if (shaper_profile) + shaper_profile->reference_count++; + tm_node->parent = parent_node; + tm_node->parent_hw_id = UINT32_MAX; + + TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node); + + return 0; +} + +static int +nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + + while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) { + if (shaper_profile->reference_count) + otx2_tm_dbg("Shaper profile %u has non zero references", + shaper_profile->shaper_profile_id); + TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper); + rte_free(shaper_profile); + } + + return 0; +} + +static int +nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = 
otx2_eth_pmd_priv(eth_dev); + uint32_t def = eth_dev->data->nb_tx_queues; + struct rte_tm_node_params params; + uint32_t leaf_parent, i; + int rc = 0; + + /* Default params */ + memset(¶ms, 0, sizeof(params)); + params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE; + + if (nix_tm_have_tl1_access(dev)) { + dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1; + rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL1, + OTX2_TM_LVL_ROOT, false, ¶ms); + if (rc) + goto exit; + rc = nix_tm_node_add_to_list(dev, def + 1, def, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL2, + OTX2_TM_LVL_SCH1, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL3, + OTX2_TM_LVL_SCH2, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL4, + OTX2_TM_LVL_SCH3, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_SMQ, + OTX2_TM_LVL_SCH4, false, ¶ms); + if (rc) + goto exit; + + leaf_parent = def + 4; + } else { + dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2; + rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL2, + OTX2_TM_LVL_ROOT, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 1, def, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL3, + OTX2_TM_LVL_SCH1, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL4, + OTX2_TM_LVL_SCH2, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_SMQ, + OTX2_TM_LVL_SCH3, false, ¶ms); + if (rc) + goto exit; + + leaf_parent = def + 3; + } + + /* Add leaf nodes */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_CNT, + OTX2_TM_LVL_QUEUE, false, ¶ms); + if (rc) + break; + } + +exit: + return rc; +} + +void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + TAILQ_INIT(&dev->node_list); + TAILQ_INIT(&dev->shaper_profile_list); +} + +int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint16_t sq_cnt = eth_dev->data->nb_tx_queues; + int rc; + + /* Clear shaper profiles */ + nix_tm_clear_shaper_profiles(dev); + dev->tm_flags = NIX_TM_DEFAULT_TREE; + + rc = nix_tm_prepare_default_tree(eth_dev); + if (rc != 0) + return rc; + + dev->tm_leaf_cnt = sq_cnt; + + return 0; +} + +int +otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* Clear shaper profiles */ + nix_tm_clear_shaper_profiles(dev); + + dev->tm_flags = 0; + return 0; +} diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h new file mode 100644 index 000000000..94023fa99 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tm.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_TM_H__ +#define __OTX2_TM_H__ + +#include + +#include + +#define NIX_TM_DEFAULT_TREE BIT_ULL(0) + +struct otx2_eth_dev; + +void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev); + +struct otx2_nix_tm_node { + TAILQ_ENTRY(otx2_nix_tm_node) node; + uint32_t id; + uint32_t hw_id; + uint32_t priority; + uint32_t weight; + uint16_t level_id; + uint16_t hw_lvl_id; + uint32_t rr_prio; + uint32_t rr_num; + uint32_t max_prio; + uint32_t parent_hw_id; + uint32_t flags; +#define NIX_TM_NODE_HWRES BIT_ULL(0) +#define NIX_TM_NODE_ENABLED BIT_ULL(1) +#define NIX_TM_NODE_USER BIT_ULL(2) + struct otx2_nix_tm_node *parent; + struct rte_tm_node_params params; +}; + +struct otx2_nix_tm_shaper_profile { + TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper; + uint32_t shaper_profile_id; + uint32_t reference_count; + struct rte_tm_shaper_params profile; +}; + +struct shaper_params { + uint64_t burst_exponent; + uint64_t burst_mantissa; + uint64_t div_exp; + uint64_t exponent; + uint64_t mantissa; + uint64_t burst; + uint64_t rate; +}; + +TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node); +TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); + +#define MAX_SCHED_WEIGHT ((uint8_t)~0) +#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1) + +/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */ +/* = NIX_MAX_HW_MTU */ +#define DEFAULT_RR_WEIGHT 71 + +#endif /* __OTX2_TM_H__ */ From patchwork Sun Jun 30 18:05:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55698 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DBAE81BA68; Sun, 30 Jun 2019 20:08:19 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id D23A51B9E6 for ; Sun, 30 Jun 2019 20:07:43 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI640t016301 for ; Sun, 30 Jun 2019 11:07:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=ZjXhgFVEu4kYcft6J/VFZjSZcNANJ/iTLYSxZ3MhKVE=; b=rnmP9YoRnXSuclR2e1E/dYsWDY1eD46wW6v3OHW7NfiMixbGCpV+VVOQ08S/Y4aPDmKp cG71/waAt9jMKRoevFSTfx2Wo2NckGPNMzxdodsDffuiBZ8IKKHCu72SlXCbLIlDwGQY Eh2dVUyhPwxhxL96Tg+TEjG5PRcG6fBznPqVOT0RI60Q6U0TQe4K0v2hsERab+93RDr5 SWH8MzkPR2KFHhw55Zw0OqcnKAwlwjLC+Jm5arg9r4GHNJyLMiTpzywwbuiirtRKYvvb YB8cwpMtYUPoGZw0UDI1yOobHK8NRximIhXewysCNQEdhe82hz1n53Fb5f2XB1YtOgRu zg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yan-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:43 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:41 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:41 -0700 
Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 706273F703F; Sun, 30 Jun 2019 11:07:39 -0700 (PDT)
From: 
To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K 
CC: Krzysztof Kanas
Date: Sun, 30 Jun 2019 23:35:34 +0530
Message-ID: <20190630180609.36705-23-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com>
MIME-Version: 1.0
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0
Subject: [dpdk-dev] [PATCH v2 22/57] net/octeontx2: alloc and free TM HW resources
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Krzysztof Kanas

Allocate and free shaper/scheduler hardware resources for nodes of
hierarchy levels in software.

Signed-off-by: Krzysztof Kanas
Signed-off-by: Nithin Dabilpuram
---
 drivers/net/octeontx2/otx2_tm.c | 350 ++++++++++++++++++++++++++++++++
 1 file changed, 350 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
index bc0474242..91f31df05 100644
--- a/drivers/net/octeontx2/otx2_tm.c
+++ b/drivers/net/octeontx2/otx2_tm.c
@@ -54,6 +54,69 @@ nix_tm_node_search(struct otx2_eth_dev *dev,
 	return NULL;
 }
 
+static uint32_t
+check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
+{
+	struct otx2_nix_tm_node *tm_node;
+	uint32_t rr_num = 0;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!tm_node->parent)
+			continue;
+
+		if (!(tm_node->parent->id == parent_id))
+			continue;
+
+		if (tm_node->priority == priority)
+			rr_num++;
+	}
+	return rr_num;
+}
+
+static int
+nix_tm_update_parent_info(struct otx2_eth_dev *dev)
+{
+	struct otx2_nix_tm_node *tm_node_child;
+	struct otx2_nix_tm_node *tm_node;
+	struct otx2_nix_tm_node *parent;
+	uint32_t rr_num = 0;
+	uint32_t priority;
+
+	TAILQ_FOREACH(tm_node, &dev->node_list, node) {
+		if (!tm_node->parent)
+			continue;
+		/* Count the group of children with the same priority, i.e. RR */
+		parent = tm_node->parent;
+		priority = tm_node->priority;
+		rr_num = check_rr(dev, priority, parent->id);
+
+		/* Assuming that multiple RR groups are
+		 * not configured based on capability.
+ */ + if (rr_num > 1) { + parent->rr_prio = priority; + parent->rr_num = rr_num; + } + + /* Find out static priority children that are not in RR */ + TAILQ_FOREACH(tm_node_child, &dev->node_list, node) { + if (!tm_node_child->parent) + continue; + if (parent->id != tm_node_child->parent->id) + continue; + if (parent->max_prio == UINT32_MAX && + tm_node_child->priority != parent->rr_prio) + parent->max_prio = 0; + + if (parent->max_prio < tm_node_child->priority && + parent->rr_prio != tm_node_child->priority) + parent->max_prio = tm_node_child->priority; + } + } + + return 0; +} + static int nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -115,6 +178,274 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) return 0; } +static int +nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask, + uint32_t flags, bool hw_only) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + struct otx2_nix_tm_node *tm_node, *next_node; + struct otx2_mbox *mbox = dev->mbox; + struct nix_txsch_free_req *req; + uint32_t shaper_profile_id; + bool skip_node = false; + int rc = 0; + + next_node = TAILQ_FIRST(&dev->node_list); + while (next_node) { + tm_node = next_node; + next_node = TAILQ_NEXT(tm_node, node); + + /* Check for only requested nodes */ + if ((tm_node->flags & flags_mask) != flags) + continue; + + if (nix_tm_have_tl1_access(dev) && + tm_node->hw_lvl_id == NIX_TXSCH_LVL_TL1) + skip_node = true; + + otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)", + tm_node->id, tm_node->hw_lvl_id, + tm_node->hw_id, tm_node); + /* Free specific HW resource if requested */ + if (!skip_node && flags_mask && + tm_node->flags & NIX_TM_NODE_HWRES) { + req = otx2_mbox_alloc_msg_nix_txsch_free(mbox); + req->flags = 0; + req->schq_lvl = tm_node->hw_lvl_id; + req->schq = tm_node->hw_id; + rc = otx2_mbox_process(mbox); + if (rc) + break; + } else { + skip_node = false; + } + tm_node->flags &= ~NIX_TM_NODE_HWRES; + + /* Leave software elements if needed */ + if (hw_only) + continue; + + shaper_profile_id = tm_node->params.shaper_profile_id; + shaper_profile = + nix_tm_shaper_profile_search(dev, shaper_profile_id); + if (shaper_profile) + shaper_profile->reference_count--; + + TAILQ_REMOVE(&dev->node_list, tm_node, node); + rte_free(tm_node); + } + + if (!flags_mask) { + /* Free all hw resources */ + req = otx2_mbox_alloc_msg_nix_txsch_free(mbox); + req->flags = TXSCHQ_FREE_ALL; + + return otx2_mbox_process(mbox); + } + + return rc; +} + +static uint8_t +nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_rsp *rsp) +{ + uint16_t schq; + uint8_t lvl; + + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) { + dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq]; + dev->txschq_contig_list[lvl][schq] = + rsp->schq_contig_list[lvl][schq]; + } + + dev->txschq[lvl] = rsp->schq[lvl]; + dev->txschq_contig[lvl] = rsp->schq_contig[lvl]; + } + return 0; +} + +static int +nix_tm_assign_id_to_node(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *child, + struct otx2_nix_tm_node *parent) +{ + uint32_t hw_id, schq_con_index, prio_offset; + uint32_t l_id, schq_index; + + otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)", + child->id, child->level_id, child->hw_lvl_id, child); + + child->flags |= NIX_TM_NODE_HWRES; + + /* Process root nodes */ + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 && + child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) { 
+ int idx = 0; + uint32_t tschq_con_index; + + l_id = child->hw_lvl_id; + tschq_con_index = dev->txschq_contig_index[l_id]; + hw_id = dev->txschq_contig_list[l_id][tschq_con_index]; + child->hw_id = hw_id; + dev->txschq_contig_index[l_id]++; + /* Update TL1 hw_id for its parent for config purpose */ + idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++; + hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx]; + child->parent_hw_id = hw_id; + return 0; + } + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 && + child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) { + uint32_t tschq_con_index; + + l_id = child->hw_lvl_id; + tschq_con_index = dev->txschq_index[l_id]; + hw_id = dev->txschq_list[l_id][tschq_con_index]; + child->hw_id = hw_id; + dev->txschq_index[l_id]++; + return 0; + } + + /* Process children with parents */ + l_id = child->hw_lvl_id; + schq_index = dev->txschq_index[l_id]; + schq_con_index = dev->txschq_contig_index[l_id]; + + if (child->priority == parent->rr_prio) { + hw_id = dev->txschq_list[l_id][schq_index]; + child->hw_id = hw_id; + child->parent_hw_id = parent->hw_id; + dev->txschq_index[l_id]++; + } else { + prio_offset = schq_con_index + child->priority; + hw_id = dev->txschq_contig_list[l_id][prio_offset]; + child->hw_id = hw_id; + } + return 0; +} + +static int +nix_tm_assign_hw_id(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_node *parent, *child; + uint32_t child_hw_lvl, con_index_inc, i; + + for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) { + TAILQ_FOREACH(parent, &dev->node_list, node) { + child_hw_lvl = parent->hw_lvl_id - 1; + if (parent->hw_lvl_id != i) + continue; + TAILQ_FOREACH(child, &dev->node_list, node) { + if (!child->parent) + continue; + if (child->parent->id != parent->id) + continue; + nix_tm_assign_id_to_node(dev, child, parent); + } + + con_index_inc = parent->max_prio + 1; + dev->txschq_contig_index[child_hw_lvl] += con_index_inc; + + /* + * Explicitly assign id to parent node if it + * doesn't have a parent + */ + if (parent->hw_lvl_id == dev->otx2_tm_root_lvl) + nix_tm_assign_id_to_node(dev, parent, NULL); + } + } + return 0; +} + +static uint8_t +nix_tm_count_req_schq(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_req *req, uint8_t lvl) +{ + struct otx2_nix_tm_node *tm_node; + uint8_t contig_count; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (lvl == tm_node->hw_lvl_id) { + req->schq[lvl - 1] += tm_node->rr_num; + if (tm_node->max_prio != UINT32_MAX) { + contig_count = tm_node->max_prio + 1; + req->schq_contig[lvl - 1] += contig_count; + } + } + if (lvl == dev->otx2_tm_root_lvl && + dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 && + tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) { + req->schq_contig[dev->otx2_tm_root_lvl]++; + } + } + + req->schq[NIX_TXSCH_LVL_TL1] = 1; + req->schq_contig[NIX_TXSCH_LVL_TL1] = 0; + + return 0; +} + +static int +nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_req *req) +{ + uint8_t i; + + for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) + nix_tm_count_req_schq(dev, req, i); + + for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) { + dev->txschq_index[i] = 0; + dev->txschq_contig_index[i] = 0; + } + return 0; +} + +static int +nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_txsch_alloc_req *req; + struct nix_txsch_alloc_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox); + + rc = nix_tm_prepare_txschq_req(dev, req); + if (rc) + return rc; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + 
nix_tm_copy_rsp_to_dev(dev, rsp); + + nix_tm_assign_hw_id(dev); + return 0; +} + +static int +nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + RTE_SET_USED(xmit_enable); + + nix_tm_update_parent_info(dev); + + rc = nix_tm_send_txsch_alloc_msg(dev); + if (rc) { + otx2_err("TM failed to alloc tm resources=%d", rc); + return rc; + } + + return 0; +} + static int nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev) { @@ -226,6 +557,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) uint16_t sq_cnt = eth_dev->data->nb_tx_queues; int rc; + /* Free up all resources already held */ + rc = nix_tm_free_resources(dev, 0, 0, false); + if (rc) { + otx2_err("Failed to freeup existing resources,rc=%d", rc); + return rc; + } + /* Clear shaper profiles */ nix_tm_clear_shaper_profiles(dev); dev->tm_flags = NIX_TM_DEFAULT_TREE; @@ -234,6 +572,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) if (rc != 0) return rc; + rc = nix_tm_alloc_resources(eth_dev, false); + if (rc != 0) + return rc; dev->tm_leaf_cnt = sq_cnt; return 0; @@ -243,6 +584,15 @@ int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + /* Xmit is assumed to be disabled */ + /* Free up resources already held */ + rc = nix_tm_free_resources(dev, 0, 0, false); + if (rc) { + otx2_err("Failed to freeup existing resources,rc=%d", rc); + return rc; + } /* Clear shaper profiles */ nix_tm_clear_shaper_profiles(dev); From patchwork Sun Jun 30 18:05:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55699 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DDA701BA8E; Sun, 30 Jun 2019 20:08:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 4A0471B9EA for ; Sun, 30 Jun 2019 20:07:48 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OG028454 for ; Sun, 30 Jun 2019 11:07:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=LPBjHU6Nh85Qvtx1FvS6PZWPI/dZZ0LoVR42Cy4KcXk=; b=iNloQYXmgqfH0qB7nl63WPbbsjSMi7lJQtIVhiYC0PnQOO2hxDIC9FKo1+qN1Ie/SYjx gCW/aSs5rCxTval02D4nxIXvOVe0vTjme6/MwpFYDWha+x4U09r+gsCqtUyFh808EmAM AHqM2PjD4+23rhYhEV2POBe4oq1joDOBp/Y30WC7XiHYydc/hCvHfeS0jvc1tFOWfL1K r4SqX1jqMnCtr1i29u2D1Y51ZzeTmir9sEp3DX2deLm2RPrKIDukJMKqQzZu1dJIM0bB 5u+UyQSVp/K6fwjXBacfLnjyqSym3kLvsv+iUAcvnctWhLd7ERsO2hKdW6r42U7IFe48 WQ== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4ghs-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:47 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:43 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; 
Sun, 30 Jun 2019 11:07:43 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 36FF93F703F; Sun, 30 Jun 2019 11:07:41 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Krzysztof Kanas Date: Sun, 30 Jun 2019 23:35:35 +0530 Message-ID: <20190630180609.36705-24-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 23/57] net/octeontx2: configure TM HW resources X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram This patch sets up and configure hierarchy in hw nodes. Since all the registers are with RVU AF, register configuration is also done using mbox communication. Signed-off-by: Nithin Dabilpuram Signed-off-by: Krzysztof Kanas --- drivers/net/octeontx2/otx2_tm.c | 504 ++++++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_tm.h | 82 ++++++ 2 files changed, 586 insertions(+) diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index 91f31df05..c6154e4d4 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -20,6 +20,41 @@ enum otx2_tm_node_level { OTX2_TM_LVL_MAX, }; +static inline +uint64_t shaper2regval(struct shaper_params *shaper) +{ + return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) | + (shaper->div_exp << 13) | (shaper->exponent << 9) | + (shaper->mantissa << 1); +} + +static int +nix_get_link(struct otx2_eth_dev *dev) +{ + int link = 13 /* SDP */; + uint16_t lmac_chan; + uint16_t map; + + lmac_chan = dev->tx_chan_base; + + /* CGX lmac link */ + if (lmac_chan >= 0x800) { + map = lmac_chan & 0x7FF; + link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF); + } else if (lmac_chan < 0x700) { + /* LBK channel */ + link = 12; + } + + return link; +} + +static uint8_t +nix_get_relchan(struct otx2_eth_dev *dev) +{ + return dev->tx_chan_base & 0xff; +} + static bool nix_tm_have_tl1_access(struct otx2_eth_dev *dev) { @@ -28,6 +63,24 @@ nix_tm_have_tl1_access(struct otx2_eth_dev *dev) !is_lbk && !dev->maxvf; } +static int +find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id) +{ + struct otx2_nix_tm_node *child_node; + + TAILQ_FOREACH(child_node, &dev->node_list, node) { + if (!child_node->parent) + continue; + if (!(child_node->parent->id == node_id)) + continue; + if (child_node->priority == child_node->parent->rr_prio) + continue; + return child_node->hw_id - child_node->priority; + } + return 0; +} + + static struct otx2_nix_tm_shaper_profile * nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) { @@ -40,6 +93,451 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) return NULL; } +static inline uint64_t +shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks, + uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p, uint64_t *div_exp_p) +{ + uint64_t div_exp, exponent, mantissa; + + /* Boundary checks */ + if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) || + value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks)) + return 0; + + if (value <= 
SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) { + /* Calculate rate div_exp and mantissa using + * the following formula: + * + * value = (cclk_hz * (256 + mantissa) + * / ((cclk_ticks << div_exp) * 256) + */ + div_exp = 0; + exponent = 0; + mantissa = MAX_RATE_MANTISSA; + + while (value < (cclk_hz / (cclk_ticks << div_exp))) + div_exp += 1; + + while (value < + ((cclk_hz * (256 + mantissa)) / + ((cclk_ticks << div_exp) * 256))) + mantissa -= 1; + } else { + /* Calculate rate exponent and mantissa using + * the following formula: + * + * value = (cclk_hz * ((256 + mantissa) << exponent) + * / (cclk_ticks * 256) + * + */ + div_exp = 0; + exponent = MAX_RATE_EXPONENT; + mantissa = MAX_RATE_MANTISSA; + + while (value < (cclk_hz * (1 << exponent)) / cclk_ticks) + exponent -= 1; + + while (value < (cclk_hz * ((256 + mantissa) << exponent)) / + (cclk_ticks * 256)) + mantissa -= 1; + } + + if (div_exp > MAX_RATE_DIV_EXP || + exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA) + return 0; + + if (div_exp_p) + *div_exp_p = div_exp; + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + /* Calculate real rate value */ + return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp); +} + +static inline uint64_t +lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl, + uint64_t value, uint64_t *exponent, + uint64_t *mantissa, uint64_t *div_exp) +{ + if (hw_lvl == NIX_TXSCH_LVL_TL1) + return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, + value, exponent, mantissa, div_exp); + else + return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, + value, exponent, mantissa, div_exp); +} + +static inline uint64_t +shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p) +{ + uint64_t exponent, mantissa; + + if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST) + return 0; + + /* Calculate burst exponent and mantissa using + * the following formula: + * + * value = (((256 + mantissa) << (exponent + 1) + / 256) + * + */ + exponent = MAX_BURST_EXPONENT; + mantissa = MAX_BURST_MANTISSA; + + while (value < (1ull << (exponent + 1))) + exponent -= 1; + + while (value < ((256 + mantissa) << (exponent + 1)) / 256) + mantissa -= 1; + + if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA) + return 0; + + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + return SHAPER_BURST(exponent, mantissa); +} + +static int +configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *tm_node, + struct shaper_params *cir, + struct shaper_params *pir) +{ + uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE; + struct otx2_nix_tm_shaper_profile *shaper_profile = NULL; + struct rte_tm_shaper_params *param; + + shaper_profile_id = tm_node->params.shaper_profile_id; + + shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id); + if (shaper_profile) { + param = &shaper_profile->profile; + /* Calculate CIR exponent and mantissa */ + if (param->committed.rate) + cir->rate = lx_shaper_rate_to_nix(CCLK_HZ, + tm_node->hw_lvl_id, + param->committed.rate, + &cir->exponent, + &cir->mantissa, + &cir->div_exp); + + /* Calculate PIR exponent and mantissa */ + if (param->peak.rate) + pir->rate = lx_shaper_rate_to_nix(CCLK_HZ, + tm_node->hw_lvl_id, + param->peak.rate, + &pir->exponent, + &pir->mantissa, + &pir->div_exp); + + /* Calculate CIR burst exponent and mantissa */ + if (param->committed.size) + cir->burst = shaper_burst_to_nix(param->committed.size, + 
&cir->burst_exponent, + &cir->burst_mantissa); + + /* Calculate PIR burst exponent and mantissa */ + if (param->peak.size) + pir->burst = shaper_burst_to_nix(param->peak.size, + &pir->burst_exponent, + &pir->burst_mantissa); + } + + return 0; +} + +static int +send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req) +{ + int rc; + + if (req->num_regs > MAX_REGS_PER_MBOX_MSG) + return -ERANGE; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + req->num_regs = 0; + return 0; +} + +static int +populate_tm_registers(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *tm_node) +{ + uint64_t strict_schedul_prio, rr_prio; + struct otx2_mbox *mbox = dev->mbox; + volatile uint64_t *reg, *regval; + uint64_t parent = 0, child = 0; + struct shaper_params cir, pir; + struct nix_txschq_config *req; + uint64_t rr_quantum; + uint32_t hw_lvl; + uint32_t schq; + int rc; + + memset(&cir, 0, sizeof(cir)); + memset(&pir, 0, sizeof(pir)); + + /* Skip leaf nodes */ + if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT) + return 0; + + /* Root node will not have a parent node */ + if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) + parent = tm_node->parent_hw_id; + else + parent = tm_node->parent->hw_id; + + /* Do we need this trigger to configure TL1 */ + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 && + tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) { + schq = parent; + /* + * Default config for TL1. + * For VF this is always ignored. + */ + + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_TL1; + + /* Set DWRR quantum */ + req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq); + req->regval[0] = TXSCH_TL1_DFLT_RR_QTM; + req->num_regs++; + + req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq); + req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1); + req->num_regs++; + + req->reg[2] = NIX_AF_TL1X_CIR(schq); + req->regval[2] = 0; + req->num_regs++; + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + } + + if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ) + child = find_prio_anchor(dev, tm_node->id); + + rr_prio = tm_node->rr_prio; + hw_lvl = tm_node->hw_lvl_id; + strict_schedul_prio = tm_node->priority; + schq = tm_node->hw_id; + rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) / + MAX_SCHED_WEIGHT; + + configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir); + + otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u," + "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64, + tm_node, tm_node->level_id, hw_lvl, + tm_node->id, schq, parent, pir.rate, cir.rate); + + rc = -EFAULT; + + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + reg = req->reg; + regval = req->regval; + req->num_regs = 0; + + /* Set xoff which will be cleared later */ + *reg++ = NIX_AF_SMQX_CFG(schq); + *regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) | + (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; + req->num_regs++; + *reg++ = NIX_AF_MDQX_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_MDQX_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_MDQX_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_MDQX_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL4: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + 
req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL4X_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_TL4X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL4X_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL4X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL4X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL3: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL3X_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_TL3X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL3X_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL3X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL3X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL2: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL2X_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_TL2X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL2X_SCHEDULE(schq); + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2) + *regval++ = (1 << 24) | rr_quantum; + else + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + *reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev)); + *regval++ = BIT_ULL(12) | nix_get_relchan(dev); + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL2X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL2X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL1: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL1X_SCHEDULE(schq); + *regval++ = rr_quantum; + req->num_regs++; + *reg++ = NIX_AF_TL1X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/); + req->num_regs++; + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL1X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + } + + return 0; +error: + otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc); + return rc; +} + + +static int +nix_tm_txsch_reg_config(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_node *tm_node; + uint32_t lvl; + int rc = 0; + + if (nix_get_link(dev) == 13) + return -EPERM; + + for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) { + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->hw_lvl_id == lvl) { + rc = populate_tm_registers(dev, tm_node); + if 
(rc) + goto exit; + } + } + } +exit: + return rc; +} + static struct otx2_nix_tm_node * nix_tm_node_search(struct otx2_eth_dev *dev, uint32_t node_id, bool user) @@ -443,6 +941,12 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) return rc; } + rc = nix_tm_txsch_reg_config(dev); + if (rc) { + otx2_err("TM failed to configure sched registers=%d", rc); + return rc; + } + return 0; } diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index 94023fa99..af1bb1862 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -64,4 +64,86 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); /* = NIX_MAX_HW_MTU */ #define DEFAULT_RR_WEIGHT 71 +/** NIX rate limits */ +#define MAX_RATE_DIV_EXP 12 +#define MAX_RATE_EXPONENT 0xf +#define MAX_RATE_MANTISSA 0xff + +/** NIX rate limiter time-wheel resolution */ +#define L1_TIME_WHEEL_CCLK_TICKS 240 +#define LX_TIME_WHEEL_CCLK_TICKS 860 + +#define CCLK_HZ 1000000000 + +/* NIX rate calculation + * CCLK = coprocessor-clock frequency in MHz + * CCLK_TICKS = rate limiter time-wheel resolution + * + * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA]) + * << NIX_*_PIR[RATE_EXPONENT]) / 256 + * PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT])) + * * PIR_ADD + * + * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA]) + * << NIX_*_CIR[RATE_EXPONENT]) / 256 + * CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT])) + * * CIR_ADD + */ +#define SHAPER_RATE(cclk_hz, cclk_ticks, \ + exponent, mantissa, div_exp) \ + (((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \ + / (((cclk_ticks) << (div_exp)) * 256)) + +#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \ + SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \ + exponent, mantissa, div_exp) + +#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \ + SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \ + exponent, mantissa, div_exp) + +/* Shaper rate limits */ +#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \ + SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP) + +#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \ + SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \ + MAX_RATE_MANTISSA, 0) + +#define MIN_L1_SHAPER_RATE(cclk_hz) \ + MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS) + +#define MAX_L1_SHAPER_RATE(cclk_hz) \ + MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS) + +/** TM Shaper - low level operations */ + +/** NIX burst limits */ +#define MAX_BURST_EXPONENT 0xf +#define MAX_BURST_MANTISSA 0xff + +/* NIX burst calculation + * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA]) + * << (NIX_*_PIR[BURST_EXPONENT] + 1)) + * / 256 + * + * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA]) + * << (NIX_*_CIR[BURST_EXPONENT] + 1)) + * / 256 + */ +#define SHAPER_BURST(exponent, mantissa) \ + (((256 + (mantissa)) << ((exponent) + 1)) / 256) + +/** Shaper burst limits */ +#define MIN_SHAPER_BURST \ + SHAPER_BURST(0, 0) + +#define MAX_SHAPER_BURST \ + SHAPER_BURST(MAX_BURST_EXPONENT,\ + MAX_BURST_MANTISSA) + +/* Default TL1 priority and Quantum from AF */ +#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1) +#define TXSCH_TL1_DFLT_RR_PRIO 1 + #endif /* __OTX2_TM_H__ */ From patchwork Sun Jun 30 18:05:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55700 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from 
[92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AF3F51BA9A; Sun, 30 Jun 2019 20:08:25 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 0B3621B965 for ; Sun, 30 Jun 2019 20:07:48 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OH028454 for ; Sun, 30 Jun 2019 11:07:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=TdoGfIVJlz6ICFHYT72X/PH4rFpLvgwIzTV2dNaQKPY=; b=mqTPbXd0TTa9gJaVOwNDG8yVZBN1R86ml6L5RpZOLhmHEYp4rSa9M9yovghRZp9ZOJB7 6qopEnVvjycvxxyTDtWcAniQcX54UUuxpw2EOHfl7/oKGIwybz8y71qII4mtd9I+R+re tlTpbv9Hw9++5rq2P8Dz+pNiRE95OuZVbTTza24B/3mDJoICAQzryPsMsO6VdZBUFbrB OPJzhm0Y3qBXxZbIUwwb7LYo+nRoxygOkgxFI/GfTVJZNgDNXOeHgAD3Yg4fQMwGCdOv rxCyAufOb/LlnJMutiWNfmEMD/IRGEIMEqV4cS0DITgbrasW7quVUshRIaAdYZmObCQ8 Ww== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4ghw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:48 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:46 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:46 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 1A3313F703F; Sun, 30 Jun 2019 11:07:44 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Krzysztof Kanas , Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:36 +0530 Message-ID: <20190630180609.36705-25-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 24/57] net/octeontx2: enable Tx through traffic manager X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Krzysztof Kanas This patch enables pkt transmit through traffic manager hierarchy by clearing software XOFF on the nodes and linking tx queues to corresponding leaf nodes. It also adds support to start and stop tx queue using traffic manager. 
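To make the sequence above concrete: a Tx queue (SQ) can transmit only after it is linked to its parent SMQ and given a DWRR quantum scaled from its TM node weight, as done in nix_tm_alloc_resources() and otx2_nix_tm_get_leaf_data() below. The following standalone C sketch shows only that scaling; tm_weight_to_rr_quantum() is a hypothetical helper, and the two constants are assumptions copied from otx2_tm.h rather than part of this patch.

/*
 * Hypothetical standalone sketch of the weight -> smq_rr_quantum
 * scaling used when linking an SQ to its SMQ. The constants are
 * assumed from otx2_tm.h (they are not shown in this hunk).
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NIX_TM_RR_QUANTUM_MAX	((1 << 24) - 1)	/* assumed: 24-bit DWRR quantum */
#define MAX_SCHED_WEIGHT	UINT8_MAX	/* assumed: max TM node weight */

static uint32_t
tm_weight_to_rr_quantum(uint32_t weight)
{
	/* Same expression the patch uses: map weight 1..255 onto the
	 * 24-bit quantum programmed via sq.smq_rr_quantum.
	 */
	return (weight * NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT;
}

int
main(void)
{
	/* DEFAULT_RR_WEIGHT is 71 in otx2_tm.h */
	printf("weight 71  -> rr_quantum 0x%" PRIx32 "\n",
	       tm_weight_to_rr_quantum(71));
	printf("weight 255 -> rr_quantum 0x%" PRIx32 "\n",
	       tm_weight_to_rr_quantum(255));
	return 0;
}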
Signed-off-by: Krzysztof Kanas Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/otx2_ethdev.c | 75 ++++++- drivers/net/octeontx2/otx2_tm.c | 296 +++++++++++++++++++++++++++- drivers/net/octeontx2/otx2_tm.h | 4 + 3 files changed, 370 insertions(+), 5 deletions(-) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 899865749..62b1e3d14 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -120,6 +120,32 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +int +otx2_cgx_rxtx_start(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_start_rxtx(mbox); + + return otx2_mbox_process(mbox); +} + +int +otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -461,16 +487,27 @@ nix_sq_init(struct otx2_eth_txq *txq) struct otx2_eth_dev *dev = txq->dev; struct otx2_mbox *mbox = dev->mbox; struct nix_aq_enq_req *sq; + uint32_t rr_quantum; + uint16_t smq; + int rc; if (txq->sqb_pool->pool_id == 0) return -EINVAL; + rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq); + if (rc) { + otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc); + return rc; + } + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); sq->qidx = txq->sq; sq->ctype = NIX_AQ_CTYPE_SQ; sq->op = NIX_AQ_INSTOP_INIT; sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq); + sq->sq.smq = smq; + sq->sq.smq_rr_quantum = rr_quantum; sq->sq.default_chan = dev->tx_chan_base; sq->sq.sqe_stype = NIX_STYPE_STF; sq->sq.ena = 1; @@ -692,12 +729,18 @@ static void otx2_nix_tx_queue_release(void *_txq) { struct otx2_eth_txq *txq = _txq; + struct rte_eth_dev *eth_dev; if (!txq) return; + eth_dev = txq->dev->eth_dev; + otx2_nix_dbg("Releasing txq %u", txq->sq); + /* Flush and disable tm */ + otx2_nix_tm_sw_xoff(txq, eth_dev->data->dev_started); + /* Free sqb's and disable sq */ nix_sq_uninit(txq); @@ -1123,24 +1166,52 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_eth_txq *txq; + int rc = -EINVAL; + + txq = eth_dev->data->tx_queues[qidx]; if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) return 0; + rc = otx2_nix_sq_sqb_aura_fc(txq, true); + if (rc) { + otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d", + qidx, rc); + goto done; + } + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; - return 0; + +done: + return rc; } int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_eth_txq *txq; + int rc; + + txq = eth_dev->data->tx_queues[qidx]; if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) return 0; + txq->fc_cache_pkts = 0; + + rc = otx2_nix_sq_sqb_aura_fc(txq, false); + if (rc) { + otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d", + qidx, rc); + goto done; + } + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; - return 0; + +done: + return rc; } static int diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index c6154e4d4..246920695 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -676,6 +676,224 @@ 
nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) return 0; } +static int +nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_txschq_config *req; + + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_SMQ; + req->num_regs = 1; + + req->reg[0] = NIX_AF_SMQX_CFG(smq); + /* Unmodified fields */ + req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) | + (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; + + if (enable) + req->regval[0] |= BIT_ULL(50) | BIT_ULL(49); + else + req->regval[0] |= 0; + + return otx2_mbox_process(mbox); +} + +int +otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable) +{ + struct otx2_eth_txq *txq = __txq; + struct npa_aq_enq_req *req; + struct npa_aq_enq_rsp *rsp; + struct otx2_npa_lf *lf; + struct otx2_mbox *mbox; + uint64_t aura_handle; + int rc; + + lf = otx2_npa_lf_obj_get(); + if (!lf) + return -EFAULT; + mbox = lf->mbox; + /* Set/clear sqb aura fc_ena */ + aura_handle = txq->sqb_pool->pool_id; + req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + req->aura_id = npa_lf_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_WRITE; + /* Below is not needed for aura writes but AF driver needs it */ + /* AF will translate to associated poolctx */ + req->aura.pool_addr = req->aura_id; + + req->aura.fc_ena = enable; + req->aura_mask.fc_ena = 1; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + /* Read back npa aura ctx */ + req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + req->aura_id = npa_lf_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Init when enabled as there might be no triggers */ + if (enable) + *(volatile uint64_t *)txq->fc_mem = rsp->aura.count; + else + *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs; + /* Sync write barrier */ + rte_wmb(); + + return 0; +} + +static void +nix_txq_flush_sq_spin(struct otx2_eth_txq *txq) +{ + uint16_t sqb_cnt, head_off, tail_off; + struct otx2_eth_dev *dev = txq->dev; + uint16_t sq = txq->sq; + uint64_t reg, val; + int64_t *regaddr; + + while (true) { + reg = ((uint64_t)sq << 32); + regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS); + val = otx2_atomic64_add_nosync(reg, regaddr); + + regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS); + val = otx2_atomic64_add_nosync(reg, regaddr); + sqb_cnt = val & 0xFFFF; + head_off = (val >> 20) & 0x3F; + tail_off = (val >> 28) & 0x3F; + + /* SQ reached quiescent state */ + if (sqb_cnt <= 1 && head_off == tail_off && + (*txq->fc_mem == txq->nb_sqb_bufs)) { + break; + } + + rte_pause(); + } +} + +int +otx2_nix_tm_sw_xoff(void *__txq, bool dev_started) +{ + struct otx2_eth_txq *txq = __txq; + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + struct nix_aq_enq_rsp *rsp; + uint16_t smq; + int rc; + + /* Get smq from sq */ + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + req->qidx = txq->sq; + req->ctype = NIX_AQ_CTYPE_SQ; + req->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get smq, rc=%d", rc); + return -EIO; + } + + /* Check if sq is enabled */ + if (!rsp->sq.ena) + return 0; + + smq = rsp->sq.smq; + + /* Enable CGX RXTX to drain pkts */ + if (!dev_started) { + rc = otx2_cgx_rxtx_start(dev); + if (rc) + return rc; + } + + rc = otx2_nix_sq_sqb_aura_fc(txq, false); + if (rc < 0) { + otx2_err("Failed to 
disable sqb aura fc, rc=%d", rc); + goto cleanup; + } + + /* Disable smq xoff for case it was enabled earlier */ + rc = nix_smq_xoff(dev, smq, false); + if (rc) { + otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc); + goto cleanup; + } + + /* Wait for sq entries to be flushed */ + nix_txq_flush_sq_spin(txq); + + /* Flush and enable smq xoff */ + rc = nix_smq_xoff(dev, smq, true); + if (rc) { + otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc); + return rc; + } + +cleanup: + /* Restore cgx state */ + if (!dev_started) + rc |= otx2_cgx_rxtx_stop(dev); + + return rc; +} + +static int +nix_tm_sw_xon(struct otx2_eth_txq *txq, + uint16_t smq, uint32_t rr_quantum) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + int rc; + + otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u", + txq->sq, txq->sq, rr_quantum); + /* Set smq from sq */ + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + req->qidx = txq->sq; + req->ctype = NIX_AQ_CTYPE_SQ; + req->op = NIX_AQ_INSTOP_WRITE; + req->sq.smq = smq; + req->sq.smq_rr_quantum = rr_quantum; + req->sq_mask.smq = ~req->sq_mask.smq; + req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum; + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to set smq, rc=%d", rc); + return -EIO; + } + + /* Enable sqb_aura fc */ + rc = otx2_nix_sq_sqb_aura_fc(txq, true); + if (rc < 0) { + otx2_err("Failed to enable sqb aura fc, rc=%d", rc); + return rc; + } + + /* Disable smq xoff */ + rc = nix_smq_xoff(dev, smq, false); + if (rc) { + otx2_err("Failed to enable smq for sq %u", txq->sq); + return rc; + } + + return 0; +} + static int nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask, uint32_t flags, bool hw_only) @@ -929,10 +1147,11 @@ static int nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_nix_tm_node *tm_node; + uint16_t sq, smq, rr_quantum; + struct otx2_eth_txq *txq; int rc; - RTE_SET_USED(xmit_enable); - nix_tm_update_parent_info(dev); rc = nix_tm_send_txsch_alloc_msg(dev); @@ -947,7 +1166,43 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) return rc; } - return 0; + /* Enable xmit as all the topology is ready */ + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->flags & NIX_TM_NODE_ENABLED) + continue; + + /* Enable xmit on sq */ + if (tm_node->level_id != OTX2_TM_LVL_QUEUE) { + tm_node->flags |= NIX_TM_NODE_ENABLED; + continue; + } + + /* Don't enable SMQ or mark as enable */ + if (!xmit_enable) + continue; + + sq = tm_node->id; + if (sq > eth_dev->data->nb_tx_queues) { + rc = -EFAULT; + break; + } + + txq = eth_dev->data->tx_queues[sq]; + + smq = tm_node->parent->hw_id; + rr_quantum = (tm_node->weight * + NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT; + + rc = nix_tm_sw_xon(txq, smq, rr_quantum); + if (rc) + break; + tm_node->flags |= NIX_TM_NODE_ENABLED; + } + + if (rc) + otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc); + + return rc; } static int @@ -1104,3 +1359,38 @@ otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) dev->tm_flags = 0; return 0; } + +int +otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq, + uint32_t *rr_quantum, uint16_t *smq) +{ + struct otx2_nix_tm_node *tm_node; + int rc; + + /* 0..sq_cnt-1 are leaf nodes */ + if (sq >= dev->tm_leaf_cnt) + return -EINVAL; + + /* Search for internal node first */ + tm_node = nix_tm_node_search(dev, sq, false); + if (!tm_node) + tm_node = 
nix_tm_node_search(dev, sq, true); + + /* Check if we found a valid leaf node */ + if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE || + !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) { + return -EIO; + } + + /* Get SMQ Id of leaf node's parent */ + *smq = tm_node->parent->hw_id; + *rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) + / MAX_SCHED_WEIGHT; + + rc = nix_smq_xoff(dev, *smq, false); + if (rc) + return rc; + tm_node->flags |= NIX_TM_NODE_ENABLED; + + return 0; +} diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index af1bb1862..2a009eece 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -16,6 +16,10 @@ struct otx2_eth_dev; void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev); int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev); int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq, + uint32_t *rr_quantum, uint16_t *smq); +int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started); +int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable); struct otx2_nix_tm_node { TAILQ_ENTRY(otx2_nix_tm_node) node; From patchwork Sun Jun 30 18:05:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55701 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 23E5C1BADD; Sun, 30 Jun 2019 20:08:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 966C61B9A7 for ; Sun, 30 Jun 2019 20:07:52 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI640u016301; Sun, 30 Jun 2019 11:07:52 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=bR43JTRKWBKeDzVjLGJ21JTc/AJa+mao73wCs4XZCNc=; b=BKmrQY40Ij94tVVz7OFJ251YsOARwODFZeSKFbZ8d/NpASeSBGRRWlk9sgPd/ktfN7ki uzQbVKplNFAlIJnkEAZ3OfZm8Vb1LKi4vUNLwx0TvH0sP6sq+KEsi0ocpbdi00jprDUC 3ekBv3JPkUCsv+WzXQzh9hFnQn3JrVIY1SZTv5t2Bo9rizuhCZLL+BUmfDgqWNWnIlWQ MfD+Jr8ptFlHtUe9cd8C+bADqCoyVP2ed8FOhC3cY/CYTv06AEOQEvv418XtWCnJnZfd EMl/b0ZGP1yBS3KQ2Vk0IshH/zLuTlDv3yac1znAo8VM159FHQoEK5Pj5rMNW7CsHtEB zg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yb6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:51 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:50 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:50 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2A2253F703F; Sun, 30 Jun 2019 11:07:47 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Harman Kalra Date: Sun, 30 Jun 2019 23:35:37 +0530 Message-ID: 
<20190630180609.36705-26-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 25/57] net/octeontx2: add ptype support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob The fields from the CQE need to be converted to the packet type and Rx offload flags in the mbuf. This patch creates lookup memory for those items, to be used in the fast path. Signed-off-by: Jerin Jacob Signed-off-by: Kiran Kumar K Signed-off-by: Harman Kalra --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 6 + drivers/net/octeontx2/otx2_lookup.c | 315 +++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 7 + 10 files changed, 336 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_lookup.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index ca40358da..0de07776f 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -20,6 +20,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Packet type parsing = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index b720c116f..b4b253aa4 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -20,6 +20,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Packet type parsing = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 5a287493f..21cc4861e 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -16,6 +16,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Packet type parsing = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index d7e8f3d56..07e44b031 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -16,6 +16,7 @@ Features Features of the OCTEON TX2 Ethdev PMD are: +- Packet type information - Promiscuous mode - SR-IOV VF - Lock-free Tx queue diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 164621087..d434b0b9d 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ otx2_link.c \ otx2_stats.c \ + otx2_lookup.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index e344d877f..3dff3e53d 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -8,6 +8,7 @@ sources = files(
'otx2_mac.c', 'otx2_link.c', 'otx2_stats.c', + 'otx2_lookup.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 62b1e3d14..a9cdafc33 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -441,6 +441,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, rxq->pool = mp; rxq->qlen = nix_qsize_to_val(qsize); rxq->qsize = qsize; + rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get(); /* Alloc completion queue */ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp); @@ -1271,6 +1272,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, .rx_queue_stop = otx2_nix_rx_queue_stop, + .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 9f73bf89b..cfc4dfe14 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -355,6 +355,12 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +/* Lookup configuration */ +void *otx2_nix_fastpath_lookup_mem_get(void); + +/* PTYPES */ +const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev); + /* Mac address handling */ int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c new file mode 100644 index 000000000..99199d08a --- /dev/null +++ b/drivers/net/octeontx2/otx2_lookup.c @@ -0,0 +1,315 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include +#include + +#include "otx2_ethdev.h" + +/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */ +#define ERRCODE_ERRLEN_WIDTH 12 +#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\ + sizeof(uint32_t)) + +#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ) + +const uint32_t * +otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + static const uint32_t ptypes[] = { + RTE_PTYPE_L2_ETHER_QINQ, /* LB */ + RTE_PTYPE_L2_ETHER_VLAN, /* LB */ + RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */ + RTE_PTYPE_L2_ETHER_ARP, /* LC */ + RTE_PTYPE_L2_ETHER_NSH, /* LC */ + RTE_PTYPE_L2_ETHER_FCOE, /* LC */ + RTE_PTYPE_L2_ETHER_MPLS, /* LC */ + RTE_PTYPE_L3_IPV4, /* LC */ + RTE_PTYPE_L3_IPV4_EXT, /* LC */ + RTE_PTYPE_L3_IPV6, /* LC */ + RTE_PTYPE_L3_IPV6_EXT, /* LC */ + RTE_PTYPE_L4_TCP, /* LD */ + RTE_PTYPE_L4_UDP, /* LD */ + RTE_PTYPE_L4_SCTP, /* LD */ + RTE_PTYPE_L4_ICMP, /* LD */ + RTE_PTYPE_L4_IGMP, /* LD */ + RTE_PTYPE_TUNNEL_GRE, /* LD */ + RTE_PTYPE_TUNNEL_ESP, /* LD */ + RTE_PTYPE_TUNNEL_NVGRE, /* LD */ + RTE_PTYPE_TUNNEL_VXLAN, /* LE */ + RTE_PTYPE_TUNNEL_GENEVE, /* LE */ + RTE_PTYPE_TUNNEL_GTPC, /* LE */ + RTE_PTYPE_TUNNEL_GTPU, /* LE */ + RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */ + RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */ + RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */ + RTE_PTYPE_INNER_L2_ETHER,/* LF */ + RTE_PTYPE_INNER_L3_IPV4, /* LG */ + RTE_PTYPE_INNER_L3_IPV6, /* LG */ + RTE_PTYPE_INNER_L4_TCP, /* LH */ + RTE_PTYPE_INNER_L4_UDP, /* LH */ + RTE_PTYPE_INNER_L4_SCTP, /* LH */ + RTE_PTYPE_INNER_L4_ICMP, /* LH */ + }; + + if (dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F) + return ptypes; + else + return NULL; +} + +/* + * +------------------ +------------------ + + * | | IL4 | IL3| IL2 | TU | L4 | L3 | L2 | + * +-------------------+-------------------+ + * + * +-------------------+------------------ + + * | | LH | LG | LF | LE | LD | LC | LB | + * +-------------------+-------------------+ + * + * ptype [LE - LD - LC - LB] = TU - L4 - L3 - T2 + * ptype_tunnel[LH - LG - LF] = IL4 - IL3 - IL2 - TU + * + */ +static void +nix_create_non_tunnel_ptype_array(uint16_t *ptype) +{ + uint8_t lb, lc, ld, le; + uint16_t idx, val; + + for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) { + lb = idx & 0xF; + lc = (idx & 0xF0) >> 4; + ld = (idx & 0xF00) >> 8; + le = (idx & 0xF000) >> 12; + val = RTE_PTYPE_UNKNOWN; + + switch (lb) { + case NPC_LT_LB_QINQ: + val |= RTE_PTYPE_L2_ETHER_QINQ; + break; + case NPC_LT_LB_CTAG: + val |= RTE_PTYPE_L2_ETHER_VLAN; + break; + } + + switch (lc) { + case NPC_LT_LC_ARP: + val |= RTE_PTYPE_L2_ETHER_ARP; + break; + case NPC_LT_LC_NSH: + val |= RTE_PTYPE_L2_ETHER_NSH; + break; + case NPC_LT_LC_FCOE: + val |= RTE_PTYPE_L2_ETHER_FCOE; + break; + case NPC_LT_LC_MPLS: + val |= RTE_PTYPE_L2_ETHER_MPLS; + break; + case NPC_LT_LC_IP: + val |= RTE_PTYPE_L3_IPV4; + break; + case NPC_LT_LC_IP_OPT: + val |= RTE_PTYPE_L3_IPV4_EXT; + break; + case NPC_LT_LC_IP6: + val |= RTE_PTYPE_L3_IPV6; + break; + case NPC_LT_LC_IP6_EXT: + val |= RTE_PTYPE_L3_IPV6_EXT; + break; + case NPC_LT_LC_PTP: + val |= RTE_PTYPE_L2_ETHER_TIMESYNC; + break; + } + + switch (ld) { + case NPC_LT_LD_TCP: + val |= RTE_PTYPE_L4_TCP; + break; + case NPC_LT_LD_UDP: + val |= RTE_PTYPE_L4_UDP; + break; + case NPC_LT_LD_SCTP: + val |= RTE_PTYPE_L4_SCTP; + break; + case NPC_LT_LD_ICMP: + val |= RTE_PTYPE_L4_ICMP; + break; + case NPC_LT_LD_IGMP: + val |= RTE_PTYPE_L4_IGMP; + break; + case NPC_LT_LD_GRE: + val |= RTE_PTYPE_TUNNEL_GRE; + break; + case NPC_LT_LD_NVGRE: + 
val |= RTE_PTYPE_TUNNEL_NVGRE; + break; + case NPC_LT_LD_ESP: + val |= RTE_PTYPE_TUNNEL_ESP; + break; + } + + switch (le) { + case NPC_LT_LE_VXLAN: + val |= RTE_PTYPE_TUNNEL_VXLAN; + break; + case NPC_LT_LE_VXLANGPE: + val |= RTE_PTYPE_TUNNEL_VXLAN_GPE; + break; + case NPC_LT_LE_GENEVE: + val |= RTE_PTYPE_TUNNEL_GENEVE; + break; + case NPC_LT_LE_GTPC: + val |= RTE_PTYPE_TUNNEL_GTPC; + break; + case NPC_LT_LE_GTPU: + val |= RTE_PTYPE_TUNNEL_GTPU; + break; + case NPC_LT_LE_TU_MPLS_IN_GRE: + val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE; + break; + case NPC_LT_LE_TU_MPLS_IN_UDP: + val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP; + break; + } + ptype[idx] = val; + } +} + +#define TU_SHIFT(x) ((x) >> PTYPE_WIDTH) +static void +nix_create_tunnel_ptype_array(uint16_t *ptype) +{ + uint8_t le, lf, lg; + uint16_t idx, val; + + /* Skip non tunnel ptype array memory */ + ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ; + + for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) { + le = idx & 0xF; + lf = (idx & 0xF0) >> 4; + lg = (idx & 0xF00) >> 8; + val = RTE_PTYPE_UNKNOWN; + + switch (le) { + case NPC_LT_LF_TU_ETHER: + val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER); + break; + } + switch (lf) { + case NPC_LT_LG_TU_IP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4); + break; + case NPC_LT_LG_TU_IP6: + val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6); + break; + } + switch (lg) { + case NPC_LT_LH_TU_TCP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP); + break; + case NPC_LT_LH_TU_UDP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP); + break; + case NPC_LT_LH_TU_SCTP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP); + break; + case NPC_LT_LH_TU_ICMP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP); + break; + } + + ptype[idx] = val; + } +} + +static void +nix_create_rx_ol_flags_array(void *mem) +{ + uint16_t idx, errcode, errlev; + uint32_t val, *ol_flags; + + /* Skip ptype array memory */ + ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ); + + for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) { + errlev = idx & 0xf; + errcode = (idx & 0xff0) >> 4; + + val = PKT_RX_IP_CKSUM_UNKNOWN; + val |= PKT_RX_L4_CKSUM_UNKNOWN; + val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN; + + switch (errlev) { + case NPC_ERRLEV_RE: + /* Mark all errors as BAD checksum errors */ + if (errcode) { + val |= PKT_RX_IP_CKSUM_BAD; + val |= PKT_RX_L4_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + val |= PKT_RX_L4_CKSUM_GOOD; + } + break; + case NPC_ERRLEV_LC: + if (errcode == NPC_EC_OIP4_CSUM || + errcode == NPC_EC_IP_FRAG_OFFSET_1) { + val |= PKT_RX_IP_CKSUM_BAD; + val |= PKT_RX_EIP_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + } + break; + case NPC_ERRLEV_LG: + if (errcode == NPC_EC_IIP4_CSUM) + val |= PKT_RX_IP_CKSUM_BAD; + else + val |= PKT_RX_IP_CKSUM_GOOD; + break; + case NPC_ERRLEV_NIX: + if (errcode == NIX_RX_PERRCODE_OL4_CHK) { + val |= PKT_RX_OUTER_L4_CKSUM_BAD; + val |= PKT_RX_L4_CKSUM_BAD; + } else if (errcode == NIX_RX_PERRCODE_IL4_CHK) { + val |= PKT_RX_L4_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + val |= PKT_RX_L4_CKSUM_GOOD; + } + break; + } + + ol_flags[idx] = val; + } +} + +void * +otx2_nix_fastpath_lookup_mem_get(void) +{ + const char name[] = "otx2_nix_fastpath_lookup_mem"; + const struct rte_memzone *mz; + void *mem; + + mz = rte_memzone_lookup(name); + if (mz != NULL) + return mz->addr; + + /* Request for the first time */ + mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ, + SOCKET_ID_ANY, 0, OTX2_ALIGN); + if (mz != NULL) { + mem = mz->addr; + /* Form the ptype array lookup memory */ + nix_create_non_tunnel_ptype_array(mem); + 
nix_create_tunnel_ptype_array(mem); + /* Form the rx ol_flags based on errcode */ + nix_create_rx_ol_flags_array(mem); + return mem; + } + return NULL; +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 1749c43ff..1283fdf37 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -5,6 +5,13 @@ #ifndef __OTX2_RX_H__ #define __OTX2_RX_H__ +#define PTYPE_WIDTH 12 +#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) +#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) +#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\ + PTYPE_TUNNEL_ARRAY_SZ) *\ + sizeof(uint16_t)) + #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) #endif /* __OTX2_RX_H__ */ From patchwork Sun Jun 30 18:05:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55702 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A60671BACA; Sun, 30 Jun 2019 20:08:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 7C9851B994 for ; Sun, 30 Jun 2019 20:07:54 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OI028454 for ; Sun, 30 Jun 2019 11:07:53 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=6jSEPwhaXj/Nb80lHDw6wPAqdUUdJyNfdlXVtO581Pw=; b=eN/s0fFY2+UMhP9BRj8uwl2WQvGlgeCnXX/29Srxib4iaHVxRTqAZqNJFfb+VRQdNbgq mpOQUrezcR+thFzKa6pBJQZHr0QwoF8qlME14zEK955lxgu8jiElq84lqkELB5bD4W3f RDkCuPvAK+dDb2ISvdlbYRWc+M8J54RoyNVPheq4vuUrYP1nBKLwq/BAmizud6Crf5Zu UMLZZ5EIWuR89pbBT/4thJJilawE8V+ua2nScmLx0CzBoyX3EsULfQ/GYBvHxrn2i7K9 m0FMtHSu25iWSaao2BpODwbJMUyj1ekA9alENoZMLinqt0RcpY9DpV3xPv2FCKK4wki0 3A== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gj7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:07:53 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:52 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:52 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 5066C3F703F; Sun, 30 Jun 2019 11:07:51 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K Date: Sun, 30 Jun 2019 23:35:38 +0530 Message-ID: <20190630180609.36705-27-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 26/57] net/octeontx2: add queue info and pool supported operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add the Rx and Tx queue info get operations and the pool ops supported operation. Signed-off-by: Nithin Dabilpuram Signed-off-by: Kiran Kumar K --- drivers/net/octeontx2/otx2_ethdev.c | 3 ++ drivers/net/octeontx2/otx2_ethdev.h | 5 +++ drivers/net/octeontx2/otx2_ethdev_ops.c | 51 +++++++++++++++++++++ 3 files changed, 59 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index a9cdafc33..a504870f6 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1293,6 +1293,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .xstats_reset = otx2_nix_xstats_reset, .xstats_get_by_id = otx2_nix_xstats_get_by_id, .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, + .rxq_info_get = otx2_nix_rxq_info_get, + .txq_info_get = otx2_nix_txq_info_get, + .pool_ops_supported = otx2_nix_pool_ops_supported, }; diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index cfc4dfe14..199d5f242 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -274,6 +274,11 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool); +void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo); +void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo); void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en); void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 1c935b627..eda5f8a01 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(C) 2019 Marvell International Ltd.
*/ +#include + #include "otx2_ethdev.h" static void @@ -86,6 +88,55 @@ otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev) nix_allmulticast_config(eth_dev, 0); } +void +otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo) +{ + struct otx2_eth_rxq *rxq; + + rxq = eth_dev->data->rx_queues[queue_id]; + + qinfo->mp = rxq->pool; + qinfo->scattered_rx = eth_dev->data->scattered_rx; + qinfo->nb_desc = rxq->qconf.nb_desc; + + qinfo->conf.rx_free_thresh = 0; + qinfo->conf.rx_drop_en = 0; + qinfo->conf.rx_deferred_start = 0; + qinfo->conf.offloads = rxq->offloads; +} + +void +otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct otx2_eth_txq *txq; + + txq = eth_dev->data->tx_queues[queue_id]; + + qinfo->nb_desc = txq->qconf.nb_desc; + + qinfo->conf.tx_thresh.pthresh = 0; + qinfo->conf.tx_thresh.hthresh = 0; + qinfo->conf.tx_thresh.wthresh = 0; + + qinfo->conf.tx_free_thresh = 0; + qinfo->conf.tx_rs_thresh = 0; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = 0; +} + +int +otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) +{ + RTE_SET_USED(eth_dev); + + if (!strcmp(pool, rte_mbuf_platform_mempool_ops())) + return 0; + + return -ENOTSUP; +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 30 18:05:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55703 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5820B1BAD7; Sun, 30 Jun 2019 20:08:41 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 689011B9A7 for ; Sun, 30 Jun 2019 20:07:58 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI640v016301; Sun, 30 Jun 2019 11:07:57 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=fmJJoBfcaUmSsa70AktvzENzL17le7jdOwlOL8PneDo=; b=RJXzA1YfDHqxBvywLPrIcxrRorGRtRRKbxt6AvTRLmiz1zgMzJOzHjPiQGrs4Qf1eZW9 wxZMQUvHLewfjCDjRsITDA8/p/flkwst1L03xtVLY9FTrdPkoRNIKEZekJ05hTMU1NcJ 56EtyTWARc9IrUAICig3VINiNdCtI8SOfjhbWNTJrqxq0CO+QDL+YOLnmQ6oaxjtDwbu vp10SsHoMCU4eePI8l2DWf/YCWC3V/ASTkJc/BAvGeAYQpF2sRVTGTpCCZjeROek6KxE DvX5uXpeRQqkwkzhJy7GWZRTIiJ/tJfrGSn0x3zppuKDJop7rrYSS807k3VEaTe3dgre Xw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3ybg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:07:57 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:55 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:55 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id E32B23F703F; Sun, 30 Jun 
2019 11:07:53 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K Date: Sun, 30 Jun 2019 23:35:39 +0530 Message-ID: <20190630180609.36705-28-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 27/57] net/octeontx2: add Rx and Tx descriptor operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx and Tx queue descriptor related operations. Signed-off-by: Jerin Jacob Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 4 ++ drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 83 ++++++++++++++++++++++ 6 files changed, 97 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 0de07776f..f07b64f24 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y @@ -21,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index b4b253aa4..911c926e4 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y @@ -21,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 21cc4861e..e275e6469 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,12 +11,14 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Free Tx mbuf on demand = Y Queue start/stop = Y RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index a504870f6..feeba5c96 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1295,6 +1295,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, .rxq_info_get = otx2_nix_rxq_info_get, .txq_info_get = otx2_nix_txq_info_get, + .rx_queue_count = otx2_nix_rx_queue_count, + 
.rx_descriptor_done = otx2_nix_rx_descriptor_done, + .rx_descriptor_status = otx2_nix_rx_descriptor_status, + .tx_done_cleanup = otx2_nix_tx_done_cleanup, .pool_ops_supported = otx2_nix_pool_ops_supported, }; diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 199d5f242..8f2691c80 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -279,6 +279,10 @@ void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); +uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx); +int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt); +int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset); +int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset); void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en); void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index eda5f8a01..44cc17200 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -126,6 +126,89 @@ otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, qinfo->conf.tx_deferred_start = 0; } +static void +nix_rx_head_tail_get(struct otx2_eth_dev *dev, + uint32_t *head, uint32_t *tail, uint16_t queue_idx) +{ + uint64_t reg, val; + + if (head == NULL || tail == NULL) + return; + + reg = (((uint64_t)queue_idx) << 32); + val = otx2_atomic64_add_nosync(reg, (int64_t *) + (dev->base + NIX_LF_CQ_OP_STATUS)); + if (val & (OP_ERR | CQ_ERR)) + val = 0; + + *tail = (uint32_t)(val & 0xFFFFF); + *head = (uint32_t)((val >> 20) & 0xFFFFF); +} + +uint32_t +otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx) +{ + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx]; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint32_t head, tail; + + nix_rx_head_tail_get(dev, &head, &tail, queue_idx); + return (tail - head) % rxq->qlen; +} + +static inline int +nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset) +{ + /* Check given offset(queue index) has packet filled by HW */ + if (tail > head && offset <= tail && offset >= head) + return 1; + /* Wrap around case */ + if (head > tail && (offset >= head || offset <= tail)) + return 1; + + return 0; +} + +int +otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset) +{ + struct otx2_eth_rxq *rxq = rx_queue; + uint32_t head, tail; + + nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev), + &head, &tail, rxq->rq); + + return nix_offset_has_packet(head, tail, offset); +} + +int +otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset) +{ + struct otx2_eth_rxq *rxq = rx_queue; + uint32_t head, tail; + + if (rxq->qlen >= offset) + return -EINVAL; + + nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev), + &head, &tail, rxq->rq); + + if (nix_offset_has_packet(head, tail, offset)) + return RTE_ETH_RX_DESC_DONE; + else + return RTE_ETH_RX_DESC_AVAIL; +} + +/* It is a NOP for octeontx2 as HW frees the buffer on xmit */ +int +otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt) +{ + RTE_SET_USED(txq); + RTE_SET_USED(free_cnt); + + return 0; +} + int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) { From patchwork Sun Jun 30 18:05:40 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55704 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1162A1BB09; Sun, 30 Jun 2019 20:08:47 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id E667E1B9B2 for ; Sun, 30 Jun 2019 20:08:00 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5DtY027158; Sun, 30 Jun 2019 11:08:00 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=/E2NnPXfYu6rdsYmE/UMHNDzAcWwZke1p1VuLL6oAw8=; b=QvUop7jadIIWxDX7isr16OwAdr3lOH8b6j9n8eK2JngP3dG8UghkOHpVZzgA36ZAGNNv tB9KxvZw5AQ9BeXTM99qi0Cgi0CwAia20cPXvaKnA17BkNNl4/RIl6K8myzP96x3DbEA hCEeFt50OrlR8bSTedn8LcqZrq9Z3SL2J7n5q/4R06TsEkFr2AT4AH08Z1aAc8OfTdiA 6EP5Mp9ZW27OeTPZXdxuoxAFuhTsUNC28jNGv5bXSxwC0HDbyuWAFFyRqqgWGy6OaDGU BvF+M18D6v8WbC4VT+1JJ5EBA1b5JnfUuyJGqffDotLdQOuRwRDHQzd9YMf49cibFvaW 5g== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gjh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:08:00 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:07:59 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:07:59 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 08A463F703F; Sun, 30 Jun 2019 11:07:56 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:40 +0530 Message-ID: <20190630180609.36705-29-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 28/57] net/octeontx2: add module EEPROM dump X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru add module EEPROM dump operation. 
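The new ops plug into the generic ethdev module EEPROM API, so applications reach them through rte_eth_dev_get_module_info() and rte_eth_dev_get_module_eeprom(). A minimal usage sketch follows; dump_module_eeprom() is a hypothetical helper, and a configured port bound to this PMD is assumed.

/*
 * Hedged usage sketch: read the whole module EEPROM through the
 * generic ethdev API, which dispatches to the .get_module_info /
 * .get_module_eeprom driver ops added by this patch.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

static int
dump_module_eeprom(uint16_t port_id)
{
	struct rte_eth_dev_module_info modinfo;
	struct rte_dev_eeprom_info eeprom;
	int rc;

	rc = rte_eth_dev_get_module_info(port_id, &modinfo);
	if (rc)
		return rc;

	memset(&eeprom, 0, sizeof(eeprom));
	eeprom.offset = 0;
	eeprom.length = modinfo.eeprom_len;
	eeprom.data = calloc(1, modinfo.eeprom_len);
	if (eeprom.data == NULL)
		return -ENOMEM;

	rc = rte_eth_dev_get_module_eeprom(port_id, &eeprom);
	if (rc == 0)
		printf("module type %u, read %u bytes\n",
		       modinfo.type, eeprom.length);

	free(eeprom.data);
	return rc;
}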
Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 51 ++++++++++++++++++++++ 6 files changed, 60 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index f07b64f24..87141244a 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -26,6 +26,7 @@ Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y +Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 911c926e4..dafbe003c 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -26,6 +26,7 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index e275e6469..7fba7e1d9 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -22,6 +22,7 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y ARMv8 = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index feeba5c96..58c2f97b5 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1300,6 +1300,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_descriptor_status = otx2_nix_rx_descriptor_status, .tx_done_cleanup = otx2_nix_tx_done_cleanup, .pool_ops_supported = otx2_nix_pool_ops_supported, + .get_module_info = otx2_nix_get_module_info, + .get_module_eeprom = otx2_nix_get_module_eeprom, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8f2691c80..5dd5d8c8b 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -274,6 +274,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_module_info *modinfo); +int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, + struct rte_dev_eeprom_info *info); int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool); void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 44cc17200..2a949439a 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -220,6 +220,57 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) return -ENOTSUP; } +static struct cgx_fw_data * +nix_get_fwdata(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct cgx_fw_data *rsp = NULL; + + otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox); + + otx2_mbox_process_msg(mbox, (void *)&rsp); + + return rsp; +} + +int +otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_module_info 
*modinfo) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_fw_data *rsp; + + rsp = nix_get_fwdata(dev); + if (rsp == NULL) + return -EIO; + + modinfo->type = rsp->fwdata.sfp_eeprom.sff_id; + modinfo->eeprom_len = SFP_EEPROM_SIZE; + + return 0; +} + +int +otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, + struct rte_dev_eeprom_info *info) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_fw_data *rsp; + + if (!info->data || !info->length || + (info->offset + info->length > SFP_EEPROM_SIZE)) + return -EINVAL; + + rsp = nix_get_fwdata(dev); + if (rsp == NULL) + return -EIO; + + otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset, + info->length); + + return 0; +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 30 18:05:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55721 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C13F21BB20; Sun, 30 Jun 2019 20:08:56 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 94A4A1B9C2 for ; Sun, 30 Jun 2019 20:08:04 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5cCL016218; Sun, 30 Jun 2019 11:08:04 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=CRhuSX3T1us13a2MKqzR8UxsIRrTe1nQtu3PHfbs9A8=; b=XH9Sdlzla+pGP1kcv+f0uotVlshilibITNYhbib+7nml+6sNtfLDlNNi/vZJrNE741Gb eJxFpOj7oafebNSwAcAL8HSIElKvRZbQjYXzBYJfWqC3WsayEyYCpCXN/CnDyOQu0+PU HcEDrfUkMeKkGz3PrNqVdTAtE5wJm4R9o3m7S9zAspJ0FNtabkm0WtVH6jzoU3trggnM 7qQ1Mmo04qtQFoCyr+rfh6NSE7eGK+FuR7AMAF5LNYIl78hp2FEMPJjTXsGtfBuUw2Gv jtetKmk/7tkZvqV8lVseyexAqV2Wbuj8CR05TXsrDDXO/Xz1Q0JKk3REG4eV62Wx/0Xm CA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3ybw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:08:03 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:02 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:02 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2EF343F703F; Sun, 30 Jun 2019 11:07:59 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:41 +0530 Message-ID: <20190630180609.36705-30-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: 
From patchwork Sun Jun 30 18:05:41 2019
Subject: [dpdk-dev] [PATCH v2 29/57] net/octeontx2: add flow control support
Date: Sun, 30 Jun 2019 23:35:41 +0530
Message-ID: <20190630180609.36705-30-jerinj@marvell.com>

From: Vamsi Attunuru

Add flow control operations and expose otx2_nix_update_flow_ctrl_mode() so that dev_start() can apply the configured flow control mode.

Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 20 ++ drivers/net/octeontx2/otx2_ethdev.h | 23 +++ drivers/net/octeontx2/otx2_flow_ctrl.c | 220 +++++++++++++++++++++ 8 files changed, 268 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 87141244a..00feb0cf2 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow control = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index dafbe003c..f3f812804 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow control = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 07e44b031..20281b030 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - MAC filtering - Port hardware statistics - Link state information +- Link flow control - Debug utilities - Context dump and error interrupt support Prerequisites
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index d434b0b9d..4a361846f 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_stats.c \ otx2_lookup.c \ otx2_ethdev.c \ + otx2_flow_ctrl.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 3dff3e53d..4b56f4461 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -10,6 +10,7 @@ sources = files( 'otx2_stats.c', 'otx2_lookup.c', 'otx2_ethdev.c', + 'otx2_flow_ctrl.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', 'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 58c2f97b5..19b502903 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -216,6 +216,14 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); + /* TX pause frames enable flowctrl on RX side */ + if (dev->fc_info.tx_pause) { + /* Single bpid is allocated for all rx channels for now */ + aq->cq.bpid = dev->fc_info.bpid[0]; + aq->cq.bp = NIX_CQ_BP_LEVEL; + aq->cq.bp_ena = 1; + } + /* Many to one
reduction */ aq->cq.qint_idx = qid % dev->qints; @@ -1073,6 +1081,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { + otx2_nix_rxchan_bpid_cfg(eth_dev, false); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); @@ -1126,6 +1135,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true); + if (rc) { + otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc); + goto free_nix_lf; + } + /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. @@ -1302,6 +1317,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .pool_ops_supported = otx2_nix_pool_ops_supported, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, + .flow_ctrl_get = otx2_nix_flow_ctrl_get, + .flow_ctrl_set = otx2_nix_flow_ctrl_set, }; static inline int @@ -1503,6 +1520,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Disable nix bpid config */ + otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 5dd5d8c8b..03ecd32ec 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -87,6 +87,9 @@ #define NIX_TX_NB_SEG_MAX 9 #endif +/* Apply BP when CQ is 75% full */ +#define NIX_CQ_BP_LEVEL (25 * 256 / 100) + #define CQ_OP_STAT_OP_ERR 63 #define CQ_OP_STAT_CQ_ERR 46 @@ -169,6 +172,14 @@ struct otx2_npc_flow_info { uint16_t flow_max_priority; }; +struct otx2_fc_info { + enum rte_eth_fc_mode mode; /**< Link flow control mode */ + uint8_t rx_pause; + uint8_t tx_pause; + uint8_t chan_cnt; + uint16_t bpid[NIX_MAX_CHAN]; +}; + struct otx2_eth_dev { OTX2_DEV; /* Base class */ MARKER otx2_eth_dev_data_start; @@ -216,6 +227,7 @@ struct otx2_eth_dev { struct otx2_nix_tm_node_list node_list; struct otx2_nix_tm_shaper_profile_list shaper_profile_list; struct otx2_rss_info rss_info; + struct otx2_fc_info fc_info; uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; @@ -368,6 +380,17 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +/* Flow Control */ +int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf); + +int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf); + +int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb); + +int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev); + /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c new file mode 100644 index 000000000..0392086d8 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_ctrl.c @@ -0,0 +1,220 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" + +int +otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_bp_cfg_req *req; + struct nix_bp_cfg_rsp *rsp; + int rc; + + if (otx2_dev_is_vf(dev)) + return 0; + + if (enb) { + req = otx2_mbox_alloc_msg_nix_bp_enable(mbox); + req->chan_base = 0; + req->chan_cnt = 1; + req->bpid_per_chan = 0; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc || req->chan_cnt != rsp->chan_cnt) { + otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d", + rsp->chan_cnt, req->chan_cnt, rc); + return rc; + } + + fc->bpid[0] = rsp->chan_bpid[0]; + } else { + req = otx2_mbox_alloc_msg_nix_bp_disable(mbox); + req->chan_base = 0; + req->chan_cnt = 1; + + rc = otx2_mbox_process(mbox); + + memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN); + } + + return rc; +} + +int +otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_pause_frm_cfg *req, *rsp; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + req->set = 0; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto done; + + if (rsp->rx_pause && rsp->tx_pause) + fc_conf->mode = RTE_FC_FULL; + else if (rsp->rx_pause) + fc_conf->mode = RTE_FC_RX_PAUSE; + else if (rsp->tx_pause) + fc_conf->mode = RTE_FC_TX_PAUSE; + else + fc_conf->mode = RTE_FC_NONE; + +done: + return rc; +} + +static int +otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + struct otx2_eth_rxq *rxq; + int i, rc; + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rxq = eth_dev->data->rx_queues[i]; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + /* The shared memory buffer can be full. 
+ * flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) + return -ENOMEM; + } + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + if (enb) { + aq->cq.bpid = fc->bpid[0]; + aq->cq_mask.bpid = ~(aq->cq_mask.bpid); + aq->cq.bp = NIX_CQ_BP_LEVEL; + aq->cq_mask.bp = ~(aq->cq_mask.bp); + } + + aq->cq.bp_ena = !!enb; + aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena); + } + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + return 0; +} + +static int +otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + return otx2_nix_cq_bp_cfg(eth_dev, enb); +} + +int +otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct cgx_pause_frm_cfg *req; + uint8_t tx_pause, rx_pause; + int rc = 0; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time || + fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) { + otx2_info("Only the flow control mode is configurable"); + return -EINVAL; + } + + if (fc_conf->mode == fc->mode) + return 0; + + rx_pause = (fc_conf->mode == RTE_FC_FULL) || + (fc_conf->mode == RTE_FC_RX_PAUSE); + tx_pause = (fc_conf->mode == RTE_FC_FULL) || + (fc_conf->mode == RTE_FC_TX_PAUSE); + + /* Check if TX pause frame is already enabled or not */ + if (fc->tx_pause ^ tx_pause) { + if (otx2_dev_is_A0(dev) && eth_dev->data->dev_started) { + /* on A0, CQ should be in disabled state + * while setting flow control configuration. + */ + otx2_info("Stop the port=%d for setting flow control", + eth_dev->data->port_id); + return 0; + } + /* TX pause frames, enable/disable flowctrl on RX side. */ + rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause); + if (rc) + return rc; + } + + req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + req->set = 1; + req->rx_pause = rx_pause; + req->tx_pause = tx_pause; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + fc->tx_pause = tx_pause; + fc->rx_pause = rx_pause; + fc->mode = fc_conf->mode; + + return rc; +} + +int +otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_fc_conf fc_conf; + + if (otx2_dev_is_vf(dev)) + return 0; + + memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf)); + /* Both Rx & Tx flow control get enabled (RTE_FC_FULL) in HW + * by the AF driver; reflect that in the PMD structure. + */ + otx2_nix_flow_ctrl_get(eth_dev, &fc_conf); + + /* To avoid a link credit deadlock on A0, disable Tx FC if it's enabled */ + if (otx2_dev_is_A0(dev) && + (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) { + fc_conf.mode = + (fc_conf.mode == RTE_FC_FULL || + fc_conf.mode == RTE_FC_TX_PAUSE) ? + RTE_FC_TX_PAUSE : RTE_FC_NONE; + } + + return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf); +}
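Application-side, these callbacks sit behind the standard ethdev flow control API. A hedged sketch (hypothetical helper; only the mode field is honoured by this PMD):

#include <string.h>
#include <rte_ethdev.h>

/* Ask for full (Rx + Tx) pause-frame flow control on a port; the PMD
 * maps this onto the CGX pause-frame mbox and the CQ backpressure
 * setup shown above.
 */
static int
enable_fc_full(uint16_t port_id)
{
        struct rte_eth_fc_conf fc_conf;

        memset(&fc_conf, 0, sizeof(fc_conf));
        fc_conf.mode = RTE_FC_FULL;

        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
}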
From patchwork Sun Jun 30 18:05:42 2019
Subject: [dpdk-dev] [PATCH v2 30/57] net/octeontx2: add PTP base support
Date: Sun, 30 Jun 2019 23:35:42 +0530
Message-ID: <20190630180609.36705-31-jerinj@marvell.com>

From: Harman Kalra

Add PTP enable and disable operations.
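As an illustrative sketch of how an application opts in (queue counts and helper name are hypothetical), requesting the Rx timestamp offload at configure time makes dev_configure() enable PTP via otx2_nix_timesync_enable():

#include <string.h>
#include <rte_ethdev.h>

/* Request HW Rx timestamping; the PMD enables PTP when
 * DEV_RX_OFFLOAD_TIMESTAMP is set (or when the owning PF already has
 * PTP enabled).
 */
static int
configure_with_timestamp(uint16_t port_id)
{
        struct rte_eth_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.rxmode.offloads = DEV_RX_OFFLOAD_TIMESTAMP;

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
}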
Signed-off-by: Harman Kalra Signed-off-by: Zyta Szpak --- doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 22 ++++- drivers/net/octeontx2/otx2_ethdev.h | 17 ++++ drivers/net/octeontx2/otx2_ptp.c | 135 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 11 +++ 7 files changed, 185 insertions(+), 3 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_ptp.c diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 20281b030..41eb3c7b9 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -27,6 +27,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Link state information - Link flow control - Debug utilities - Context dump and error interrupt support +- IEEE1588 timestamping Prerequisites ------------- diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 4a361846f..a0155e727 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -31,6 +31,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ + otx2_ptp.c \ otx2_link.c \ otx2_stats.c \ otx2_lookup.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 4b56f4461..2cac57d2b 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -6,6 +6,7 @@ sources = files( 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', + 'otx2_ptp.c', 'otx2_link.c', 'otx2_stats.c', 'otx2_lookup.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 19b502903..29e8130f4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -336,9 +336,7 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) static inline int nix_get_data_off(struct otx2_eth_dev *dev) { - RTE_SET_USED(dev); - - return 0; + return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0; } uint64_t @@ -450,6 +448,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, rxq->qlen = nix_qsize_to_val(qsize); rxq->qsize = qsize; rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get(); + rxq->tstamp = &dev->tstamp; /* Alloc completion queue */ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp); @@ -717,6 +716,7 @@ otx2_nix_form_default_desc(struct otx2_eth_txq *txq) send_mem->dsz = 0x0; send_mem->wmem = 0x1; send_mem->alg = NIX_SENDMEMALG_SETTSTMP; + send_mem->addr = txq->dev->tstamp.tx_tstamp_iova; } sg = (union nix_send_sg_s *)&txq->cmd[4]; } else { @@ -1141,6 +1141,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Enable PTP if it was requested by the app or if it is already + * enabled in PF owning this VF + */ + memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info)); + if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || + otx2_ethdev_is_ptp_en(dev)) + otx2_nix_timesync_enable(eth_dev); + else + otx2_nix_timesync_disable(eth_dev); + /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. 
@@ -1319,6 +1329,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .get_module_eeprom = otx2_nix_get_module_eeprom, .flow_ctrl_get = otx2_nix_flow_ctrl_get, .flow_ctrl_set = otx2_nix_flow_ctrl_set, + .timesync_enable = otx2_nix_timesync_enable, + .timesync_disable = otx2_nix_timesync_disable, }; static inline int @@ -1523,6 +1535,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable PTP if already enabled */ + if (otx2_ethdev_is_ptp_en(dev)) + otx2_nix_timesync_disable(eth_dev); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 03ecd32ec..1ca28add4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -13,6 +13,7 @@ #include #include #include +#include <rte_time.h> #include "otx2_common.h" #include "otx2_dev.h" @@ -128,6 +129,12 @@ #define NIX_DEFAULT_RSS_CTX_GROUP 0 #define NIX_DEFAULT_RSS_MCAM_IDX -1 +#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en) + +#define NIX_TIMESYNC_TX_CMD_LEN 8 +/* Additional timesync values. */ +#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL + enum nix_q_size_e { nix_q_size_16, /* 16 entries */ nix_q_size_64, /* 64 entries */ @@ -234,6 +241,12 @@ struct otx2_eth_dev { struct otx2_eth_qconf *tx_qconf; struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; + /* PTP counters */ + bool ptp_en; + struct otx2_timesync_info tstamp; + struct rte_timecounter systime_tc; + struct rte_timecounter rx_tstamp_tc; + struct rte_timecounter tx_tstamp_tc; } __rte_cache_aligned; struct otx2_eth_txq { @@ -414,4 +427,8 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, /* Rx and Tx routines */ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); +/* Timesync - PTP routines */ +int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev); +int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev); + #endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c new file mode 100644 index 000000000..105067949 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_ethdev.h" + +#define PTP_FREQ_ADJUST (1 << 9) + +static void +nix_start_timecounters(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter)); + memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + + dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; + dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; + dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; +} + +static int +nix_ptp_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + int rc = 0; + + if (otx2_dev_is_vf(dev)) + return rc; + + if (en) { + /* Enable time stamping of sent PTP packets. */ + otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("MBOX ptp tx conf enable failed: err %d", rc); + return rc; + } + /* Enable time stamping of received PTP packets.
*/ + otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox); + } else { + /* Disable time stamping of sent PTP packets. */ + otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("MBOX ptp tx conf disable failed: err %d", rc); + return rc; + } + /* Disable time stamping of received PTP packets. */ + otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox); + } + + return otx2_mbox_process(mbox); +} + +int +otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int i, rc = 0; + + if (otx2_ethdev_is_ptp_en(dev)) { + otx2_info("PTP mode is already enabled"); + return -EINVAL; + } + + /* If we are VF, no further action can be taken */ + if (otx2_dev_is_vf(dev)) + return -EINVAL; + + if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) { + otx2_err("Ptype offload is disabled, it should be enabled"); + return -EINVAL; + } + + /* Allocate an IOVA address for the tx tstamp */ + const struct rte_memzone *ts; + ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts", + 0, OTX2_ALIGN, OTX2_ALIGN, + dev->node); + if (ts == NULL) { + otx2_err("Failed to allocate mem for tx tstamp addr"); + return -ENOMEM; + } + + dev->tstamp.tx_tstamp_iova = ts->iova; + dev->tstamp.tx_tstamp = ts->addr; + + /* System time should be already on by default */ + nix_start_timecounters(eth_dev); + + dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP; + dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F; + dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F; + + rc = nix_ptp_config(eth_dev, 1); + if (!rc) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; + otx2_nix_form_default_desc(txq); + } + } + return rc; +} + +int +otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int i, rc = 0; + + if (!otx2_ethdev_is_ptp_en(dev)) { + otx2_nix_dbg("PTP mode is disabled"); + return -EINVAL; + } + + /* If we are VF, nothing else can be done */ + if (otx2_dev_is_vf(dev)) + return -EINVAL; + + dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP; + dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F; + dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F; + + rc = nix_ptp_config(eth_dev, 0); + if (!rc) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; + otx2_nix_form_default_desc(txq); + } + } + return rc; +}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 1283fdf37..0c3627c12 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -13,5 +13,16 @@ sizeof(uint16_t)) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) + +#define NIX_TIMESYNC_RX_OFFSET 8 + +struct otx2_timesync_info { + uint64_t rx_tstamp; + rte_iova_t tx_tstamp_iova; + uint64_t *tx_tstamp; + uint8_t tx_ready; + uint8_t rx_ready; +} __rte_cache_aligned; #endif /* __OTX2_RX_H__ */
From patchwork Sun Jun 30 18:05:43 2019
Subject: [dpdk-dev] [PATCH v2 31/57] net/octeontx2: add remaining PTP operations
Date: Sun, 30 Jun 2019 23:35:43 +0530
Message-ID: <20190630180609.36705-32-jerinj@marvell.com>

From: Harman Kalra

Add remaining PTP configuration/slowpath operations. Timesync feature is available only for PF devices.
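For illustration, the new slow-path ops are reachable through the standard ethdev timesync API; a hedged sketch (hypothetical helper; PF device assumed):

#include <stdio.h>
#include <time.h>
#include <rte_ethdev.h>

/* Poll the PTP clock and nudge it. */
static void
ptp_clock_demo(uint16_t port_id)
{
        struct timespec ts;

        rte_eth_timesync_enable(port_id);

        /* Reads the PTP clock via the PTP_OP_GET_CLOCK mbox request. */
        if (rte_eth_timesync_read_time(port_id, &ts) == 0)
                printf("PTP time %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

        /* Deltas inside +/-512 ns also trigger a PTP_OP_ADJFINE request. */
        rte_eth_timesync_adjust_time(port_id, 1000);
}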
Signed-off-by: Harman Kalra Signed-off-by: Zyta Szpak --- doc/guides/nics/features/octeontx2.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 6 ++ drivers/net/octeontx2/otx2_ethdev.h | 11 +++ drivers/net/octeontx2/otx2_ptp.c | 130 +++++++++++++++++++++++++ 4 files changed, 149 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 00feb0cf2..46fb00be6 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -23,6 +23,8 @@ RSS reta update = Y Inner RSS = Y Flow control = Y Packet type parsing = Y +Timesync = Y +Timestamp offload = Y Rx descriptor status = Y Basic stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 29e8130f4..7512aacb3 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -47,6 +47,7 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) static const struct otx2_dev_ops otx2_dev_ops = { .link_status_update = otx2_eth_dev_link_status_update, + .ptp_info_update = otx2_eth_dev_ptp_info_update }; static int @@ -1331,6 +1332,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .flow_ctrl_set = otx2_nix_flow_ctrl_set, .timesync_enable = otx2_nix_timesync_enable, .timesync_disable = otx2_nix_timesync_disable, + .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp, + .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp, + .timesync_adjust_time = otx2_nix_timesync_adjust_time, + .timesync_read_time = otx2_nix_timesync_read_time, + .timesync_write_time = otx2_nix_timesync_write_time, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 1ca28add4..8f8d93a39 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -430,5 +430,16 @@ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); /* Timesync - PTP routines */ int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev); int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev); +int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp, + uint32_t flags); +int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp); +int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta); +int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev, + const struct timespec *ts); +int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, + struct timespec *ts); +int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en); #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index 105067949..5291da241 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -57,6 +57,23 @@ nix_ptp_config(struct rte_eth_dev *eth_dev, int en) return otx2_mbox_process(mbox); } +int +otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en) +{ + struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev; + struct rte_eth_dev *eth_dev = otx2_dev->eth_dev; + int i; + + otx2_dev->ptp_en = ptp_en; + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i]; + rxq->mbuf_initializer = + otx2_nix_rxq_mbuf_setup(otx2_dev, + eth_dev->data->port_id); + } + return 0; +} + int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) { @@ -133,3 +150,116 @@ otx2_nix_timesync_disable(struct 
rte_eth_dev *eth_dev) } return rc; } + +int +otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp, + uint32_t __rte_unused flags) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_timesync_info *tstamp = &dev->tstamp; + uint64_t ns; + + if (!tstamp->rx_ready) + return -EINVAL; + + ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp); + *timestamp = rte_ns_to_timespec(ns); + tstamp->rx_ready = 0; + + otx2_nix_dbg("rx timestamp: %llu sec: %lu nsec %lu", + (unsigned long long)tstamp->rx_tstamp, timestamp->tv_sec, + timestamp->tv_nsec); + + return 0; +} + +int +otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_timesync_info *tstamp = &dev->tstamp; + uint64_t ns; + + if (*tstamp->tx_tstamp == 0) + return -EINVAL; + + ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp); + *timestamp = rte_ns_to_timespec(ns); + + otx2_nix_dbg("tx timestamp: %llu sec: %lu nsec %lu", + *(unsigned long long *)tstamp->tx_tstamp, + timestamp->tv_sec, timestamp->tv_nsec); + + *tstamp->tx_tstamp = 0; + rte_wmb(); + + return 0; +} + +int +otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct ptp_req *req; + struct ptp_rsp *rsp; + int rc; + + /* Adjust the frequency to make the ticks increment at 10^9 ticks per sec */ + if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) { + req = otx2_mbox_alloc_msg_ptp_op(mbox); + req->op = PTP_OP_ADJFINE; + req->scaled_ppm = delta; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + } + dev->systime_tc.nsec += delta; + dev->rx_tstamp_tc.nsec += delta; + dev->tx_tstamp_tc.nsec += delta; + + return 0; +} + +int +otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev, + const struct timespec *ts) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t ns; + + ns = rte_timespec_to_ns(ts); + /* Set the time counters to a new value.
+ */ + dev->systime_tc.nsec = ns; + dev->rx_tstamp_tc.nsec = ns; + dev->tx_tstamp_tc.nsec = ns; + + return 0; +} + +int +otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct ptp_req *req; + struct ptp_rsp *rsp; + uint64_t ns; + int rc; + + req = otx2_mbox_alloc_msg_ptp_op(mbox); + req->op = PTP_OP_GET_CLOCK; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + ns = rte_timecounter_update(&dev->systime_tc, rsp->clk); + *ts = rte_ns_to_timespec(ns); + + otx2_nix_dbg("PTP time read: %ld.%09ld", ts->tv_sec, ts->tv_nsec); + + return 0; +}

From patchwork Sun Jun 30 18:05:44 2019
Subject: [dpdk-dev] [PATCH v2 32/57] net/octeontx2: introducing flow driver
Date: Sun, 30 Jun 2019 23:35:44 +0530
Message-ID: <20190630180609.36705-33-jerinj@marvell.com>

From: Kiran Kumar K

Introduce the flow infrastructure for octeontx2. It will be used to maintain rte_flow rules. The create, destroy, validate, query, flush and isolate flow operations will be supported.
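To make the intended usage concrete, a hedged application-side sketch (hypothetical helper) of an rte_flow rule that the driver will compile into an NPC MCAM entry:

#include <rte_flow.h>

/* Steer ingress IPv4/UDP packets to Rx queue 2. */
static struct rte_flow *
create_udp_to_queue_rule(uint16_t port_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_UDP },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 2 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
}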
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.h | 388 ++++++++++++++++++++++++++ 1 file changed, 388 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow.h
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h new file mode 100644 index 000000000..95bb6c2bf --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow.h @@ -0,0 +1,388 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_FLOW_H__ +#define __OTX2_FLOW_H__ + +#include + +#include +#include +#include + +#include "otx2_common.h" +#include "otx2_ethdev.h" +#include "otx2_mbox.h" + +int otx2_flow_init(struct otx2_eth_dev *hw); +int otx2_flow_fini(struct otx2_eth_dev *hw); +extern const struct rte_flow_ops otx2_flow_ops; + +enum { + OTX2_INTF_RX = 0, + OTX2_INTF_TX = 1, + OTX2_INTF_MAX = 2, +}; + +#define NPC_IH_LENGTH 8 +#define NPC_TPID_LENGTH 2 +#define NPC_COUNTER_NONE (-1) +/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */ +#define NPC_MAX_EXTRACT_DATA_LEN (64) +#define NPC_LDATA_LFLAG_LEN (16) +#define NPC_MCAM_TOT_ENTRIES (4096) +#define NPC_MAX_KEY_NIBBLES (31) +/* Nibble offsets */ +#define NPC_LAYER_KEYX_SZ (3) +#define NPC_PARSE_KEX_S_LA_OFFSET (7) +#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \ + ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \ + + NPC_PARSE_KEX_S_LA_OFFSET) + + +/* supported flow actions flags */ +#define OTX2_FLOW_ACT_MARK (1 << 0) +#define OTX2_FLOW_ACT_FLAG (1 << 1) +#define OTX2_FLOW_ACT_DROP (1 << 2) +#define OTX2_FLOW_ACT_QUEUE (1 << 3) +#define OTX2_FLOW_ACT_RSS (1 << 4) +#define OTX2_FLOW_ACT_DUP (1 << 5) +#define OTX2_FLOW_ACT_SEC (1 << 6) +#define OTX2_FLOW_ACT_COUNT (1 << 7) + +/* terminating actions */ +#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \ + OTX2_FLOW_ACT_QUEUE | \ + OTX2_FLOW_ACT_RSS | \ + OTX2_FLOW_ACT_DUP | \ + OTX2_FLOW_ACT_SEC) + +/* This mark value indicates flag action */ +#define OTX2_FLOW_FLAG_VAL (0xffff) + +#define NIX_RX_ACT_MATCH_OFFSET (40) +#define NIX_RX_ACT_MATCH_MASK (0xFFFF) + +#define NIX_RSS_ACT_GRP_OFFSET (20) +#define NIX_RSS_ACT_ALG_OFFSET (56) +#define NIX_RSS_ACT_GRP_MASK (0xFFFFF) +#define NIX_RSS_ACT_ALG_MASK (0x1F) + +/* PMD-specific definition of the opaque struct rte_flow */ +#define OTX2_MAX_MCAM_WIDTH_DWORDS 7 + +enum npc_mcam_intf { + NPC_MCAM_RX, + NPC_MCAM_TX +}; + +struct npc_xtract_info { + /* Length in bytes of pkt data extracted. len = 0 + * indicates that extraction is disabled.
+ */ + uint8_t len; + uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */ + uint8_t key_off; /* Byte offset in MCAM key where data is placed */ + uint8_t enable; /* Extraction enabled or disabled */ +}; + +/* Information for a given {LAYER, LTYPE} */ +struct npc_lid_lt_xtract_info { + /* Info derived from parser configuration */ + uint16_t npc_proto; /* Network protocol identified */ + uint8_t valid_flags_mask; /* Flags applicable */ + uint8_t is_terminating:1; /* No more parsing */ + struct npc_xtract_info xtract[NPC_MAX_LD]; +}; + +union npc_kex_ldata_flags_cfg { + struct { + #if defined(__BIG_ENDIAN_BITFIELD) + uint64_t rvsd_62_1 : 61; + uint64_t lid : 3; + #else + uint64_t lid : 3; + uint64_t rvsd_62_1 : 61; + #endif + } s; + + uint64_t i; +}; + +typedef struct npc_lid_lt_xtract_info + otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]; +typedef struct npc_lid_lt_xtract_info + otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD]; + + +/* MBOX_MSG_NPC_GET_DATAX_CFG Response */ +struct npc_get_datax_cfg { + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD]; + /* Extract information indexed with [LID][LTYPE] */ + struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT]; + /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE] + * Fields flags_ena_ld0, flags_ena_ld1 in + * struct npc_lid_lt_xtract_info indicate if this is applicable + * for a given {LAYER, LTYPE} + */ + struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT]; +}; + +struct otx2_mcam_ents_info { + /* Current max & min values of mcam index */ + uint32_t max_id; + uint32_t min_id; + uint32_t free_ent; + uint32_t live_ent; +}; + +struct rte_flow { + uint8_t nix_intf; + uint32_t mcam_id; + int32_t ctr_id; + uint32_t priority; + /* Contiguous match string */ + uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS]; + uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS]; + uint64_t npc_action; + TAILQ_ENTRY(rte_flow) next; +}; + +TAILQ_HEAD(otx2_flow_list, rte_flow); + +/* Accessed from ethdev private - otx2_eth_dev */ +struct otx2_npc_flow_info { + rte_atomic32_t mark_actions; + uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */ + uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */ + uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */ + uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */ + uint32_t mcam_entries; /* mcam entries supported */ + otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */ + otx2_fxcfg_t prx_fxcfg; /* Flag extract */ + otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */ + /* mcam entry info per priority level: both free & in-use */ + struct otx2_mcam_ents_info *flow_entry_info; + /* Bitmap of free preallocated entries in ascending index & + * descending priority + */ + struct rte_bitmap **free_entries; + /* Bitmap of free preallocated entries in descending index & + * ascending priority + */ + struct rte_bitmap **free_entries_rev; + /* Bitmap of live entries in ascending index & descending priority */ + struct rte_bitmap **live_entries; + /* Bitmap of live entries in descending index & ascending priority */ + struct rte_bitmap **live_entries_rev; + /* Priority bucket wise tail queue of all rte_flow resources */ + struct otx2_flow_list *flow_list; + uint32_t rss_grps; /* rss groups supported */ + struct rte_bitmap *rss_grp_entries; + uint16_t channel; /*rx channel */ + uint16_t flow_prealloc_size; + uint16_t flow_max_priority; +}; + +struct 
otx2_parse_state { + struct otx2_npc_flow_info *npc; + const struct rte_flow_item *pattern; + const struct rte_flow_item *last_pattern; /* Temp usage */ + struct rte_flow_error *error; + struct rte_flow *flow; + uint8_t tunnel; + uint8_t terminate; + uint8_t layer_mask; + uint8_t lt[NPC_MAX_LID]; + uint8_t flags[NPC_MAX_LID]; + uint8_t *mcam_data; /* point to flow->mcam_data + key_len */ + uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */ +}; + +struct otx2_flow_item_info { + const void *def_mask; /* rte_flow default mask */ + void *hw_mask; /* hardware supported mask */ + int len; /* length of item */ + const void *spec; /* spec to use, NULL implies match any */ + const void *mask; /* mask to use */ + uint8_t hw_hdr_len; /* Extra data len at each layer*/ +}; + +struct otx2_idev_kex_cfg { + struct npc_get_kex_cfg_rsp kex_cfg; + rte_atomic16_t kex_refcnt; +}; + +enum npc_kpu_parser_flag { + NPC_F_NA = 0, + NPC_F_PKI, + NPC_F_PKI_VLAN, + NPC_F_PKI_ETAG, + NPC_F_PKI_ITAG, + NPC_F_PKI_MPLS, + NPC_F_PKI_NSH, + NPC_F_ETYPE_UNK, + NPC_F_ETHER_VLAN, + NPC_F_ETHER_ETAG, + NPC_F_ETHER_ITAG, + NPC_F_ETHER_MPLS, + NPC_F_ETHER_NSH, + NPC_F_STAG_CTAG, + NPC_F_STAG_CTAG_UNK, + NPC_F_STAG_STAG_CTAG, + NPC_F_STAG_STAG_STAG, + NPC_F_QINQ_CTAG, + NPC_F_QINQ_CTAG_UNK, + NPC_F_QINQ_QINQ_CTAG, + NPC_F_QINQ_QINQ_QINQ, + NPC_F_BTAG_ITAG, + NPC_F_BTAG_ITAG_STAG, + NPC_F_BTAG_ITAG_CTAG, + NPC_F_BTAG_ITAG_UNK, + NPC_F_ETAG_CTAG, + NPC_F_ETAG_BTAG_ITAG, + NPC_F_ETAG_STAG, + NPC_F_ETAG_QINQ, + NPC_F_ETAG_ITAG, + NPC_F_ETAG_ITAG_STAG, + NPC_F_ETAG_ITAG_CTAG, + NPC_F_ETAG_ITAG_UNK, + NPC_F_ITAG_STAG_CTAG, + NPC_F_ITAG_STAG, + NPC_F_ITAG_CTAG, + NPC_F_MPLS_4_LABELS, + NPC_F_MPLS_3_LABELS, + NPC_F_MPLS_2_LABELS, + NPC_F_IP_HAS_OPTIONS, + NPC_F_IP_IP_IN_IP, + NPC_F_IP_6TO4, + NPC_F_IP_MPLS_IN_IP, + NPC_F_IP_UNK_PROTO, + NPC_F_IP_IP_IN_IP_HAS_OPTIONS, + NPC_F_IP_6TO4_HAS_OPTIONS, + NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS, + NPC_F_IP_UNK_PROTO_HAS_OPTIONS, + NPC_F_IP6_HAS_EXT, + NPC_F_IP6_TUN_IP6, + NPC_F_IP6_MPLS_IN_IP, + NPC_F_TCP_HAS_OPTIONS, + NPC_F_TCP_HTTP, + NPC_F_TCP_HTTPS, + NPC_F_TCP_PPTP, + NPC_F_TCP_UNK_PORT, + NPC_F_TCP_HTTP_HAS_OPTIONS, + NPC_F_TCP_HTTPS_HAS_OPTIONS, + NPC_F_TCP_PPTP_HAS_OPTIONS, + NPC_F_TCP_UNK_PORT_HAS_OPTIONS, + NPC_F_UDP_VXLAN, + NPC_F_UDP_VXLAN_NOVNI, + NPC_F_UDP_VXLAN_NOVNI_NSH, + NPC_F_UDP_VXLANGPE, + NPC_F_UDP_VXLANGPE_NSH, + NPC_F_UDP_VXLANGPE_MPLS, + NPC_F_UDP_VXLANGPE_NOVNI, + NPC_F_UDP_VXLANGPE_NOVNI_NSH, + NPC_F_UDP_VXLANGPE_NOVNI_MPLS, + NPC_F_UDP_VXLANGPE_UNK, + NPC_F_UDP_VXLANGPE_NONP, + NPC_F_UDP_GTP_GTPC, + NPC_F_UDP_GTP_GTPU_G_PDU, + NPC_F_UDP_GTP_GTPU_UNK, + NPC_F_UDP_UNK_PORT, + NPC_F_UDP_GENEVE, + NPC_F_UDP_GENEVE_OAM, + NPC_F_UDP_GENEVE_CRI_OPT, + NPC_F_UDP_GENEVE_OAM_CRI_OPT, + NPC_F_GRE_NVGRE, + NPC_F_GRE_HAS_SRE, + NPC_F_GRE_HAS_CSUM, + NPC_F_GRE_HAS_KEY, + NPC_F_GRE_HAS_SEQ, + NPC_F_GRE_HAS_CSUM_KEY, + NPC_F_GRE_HAS_CSUM_SEQ, + NPC_F_GRE_HAS_KEY_SEQ, + NPC_F_GRE_HAS_CSUM_KEY_SEQ, + NPC_F_GRE_HAS_ROUTE, + NPC_F_GRE_UNK_PROTO, + NPC_F_GRE_VER1, + NPC_F_GRE_VER1_HAS_SEQ, + NPC_F_GRE_VER1_HAS_ACK, + NPC_F_GRE_VER1_HAS_SEQ_ACK, + NPC_F_GRE_VER1_UNK_PROTO, + NPC_F_TU_ETHER_UNK, + NPC_F_TU_ETHER_CTAG, + NPC_F_TU_ETHER_CTAG_UNK, + NPC_F_TU_ETHER_STAG_CTAG, + NPC_F_TU_ETHER_STAG_CTAG_UNK, + NPC_F_TU_ETHER_STAG, + NPC_F_TU_ETHER_STAG_UNK, + NPC_F_TU_ETHER_QINQ_CTAG, + NPC_F_TU_ETHER_QINQ_CTAG_UNK, + NPC_F_TU_ETHER_QINQ, + NPC_F_TU_ETHER_QINQ_UNK, + NPC_F_LAST /* has to be the last item */ +}; + + +int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id); + 
+int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id, + uint64_t *count); + +int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id); + +int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry); + +int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox); + +int otx2_flow_update_parse_state(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, + int lid, int lt, uint8_t flags); + +int otx2_flow_parse_item_basic(const struct rte_flow_item *item, + struct otx2_flow_item_info *info, + struct rte_flow_error *error); + +void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask); + +int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, + struct otx2_mbox *mbox, + struct otx2_parse_state *pst, + struct otx2_npc_flow_info *flow_info); + +void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, + int lid, int lt); + +const struct rte_flow_item * +otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern); + +int otx2_flow_parse_lh(struct otx2_parse_state *pst); + +int otx2_flow_parse_lg(struct otx2_parse_state *pst); + +int otx2_flow_parse_lf(struct otx2_parse_state *pst); + +int otx2_flow_parse_le(struct otx2_parse_state *pst); + +int otx2_flow_parse_ld(struct otx2_parse_state *pst); + +int otx2_flow_parse_lc(struct otx2_parse_state *pst); + +int otx2_flow_parse_lb(struct otx2_parse_state *pst); + +int otx2_flow_parse_la(struct otx2_parse_state *pst); + +int otx2_flow_parse_actions(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow); + +int otx2_flow_free_all_resources(struct otx2_eth_dev *hw); + +int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid); +#endif /* __OTX2_FLOW_H__ */
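To illustrate otx2_flow_keyx_compress() declared above (its implementation arrives later in this series): each bit set in nibble_mask selects one 4-bit nibble of the 128-bit MCAM key, and the selected nibbles are packed contiguously from bit 0. A standalone sketch of the same scheme:

#include <stdint.h>

#define MAX_KEY_NIBBLES 31      /* mirrors NPC_MAX_KEY_NIBBLES */

/* Pack the nibbles selected by nibble_mask into the low end of the key. */
static void
keyx_compress(uint64_t data[2], uint32_t nibble_mask)
{
        uint64_t cdata[2] = {0, 0};
        int i, j = 0;

        for (i = 0; i < MAX_KEY_NIBBLES; i++) {
                if (nibble_mask & (1u << i)) {
                        uint64_t nib = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
                        cdata[j / 16] |= nib << ((j & 0xf) * 4);
                        j++;
                }
        }
        data[0] = cdata[0];
        data[1] = cdata[1];
}

/* e.g. data = {0xABC, 0} with mask 0b101 keeps nibbles 0 (0xC) and
 * 2 (0xA), giving data = {0xAC, 0}.
 */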
From patchwork Sun Jun 30 18:05:45 2019
Subject: [dpdk-dev] [PATCH v2 33/57] net/octeontx2: add flow utility functions
Date: Sun, 30 Jun 2019 23:35:45 +0530
Message-ID: <20190630180609.36705-34-jerinj@marvell.com>

From: Kiran Kumar K

Add a first pass of rte_flow utility functions for octeontx2. These will be used to communicate with the AF driver.

Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.h | 7 +- drivers/net/octeontx2/otx2_flow.h | 2 + drivers/net/octeontx2/otx2_flow_utils.c | 387 ++++++++++++++++++++++++ 5 files changed, 392 insertions(+), 6 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index a0155e727..8d1aeae3f 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_lookup.c \ otx2_ethdev.c \ otx2_flow_ctrl.c \ + otx2_flow_utils.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ otx2_ethdev_debug.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 2cac57d2b..75156ddbe 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -12,6 +12,7 @@ sources = files( 'otx2_lookup.c', 'otx2_ethdev.c', 'otx2_flow_ctrl.c', + 'otx2_flow_utils.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', 'otx2_ethdev_debug.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8f8d93a39..e8a22b6ec 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -17,6 +17,7 @@ #include "otx2_common.h" #include "otx2_dev.h" +#include "otx2_flow.h" #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" @@ -173,12 +174,6 @@ struct otx2_eth_qconf { uint16_t nb_desc; }; -struct otx2_npc_flow_info { - uint16_t channel; /*rx channel */ - uint16_t flow_prealloc_size; - uint16_t flow_max_priority; -}; - struct otx2_fc_info { enum rte_eth_fc_mode mode; /**< Link flow control mode */ uint8_t rx_pause;
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h index 95bb6c2bf..f5cc3b983 100644 --- a/drivers/net/octeontx2/otx2_flow.h +++ b/drivers/net/octeontx2/otx2_flow.h @@ -15,6 +15,8 @@ #include "otx2_ethdev.h" #include "otx2_mbox.h" +struct otx2_eth_dev; + int otx2_flow_init(struct otx2_eth_dev
*hw); int otx2_flow_fini(struct otx2_eth_dev *hw); extern const struct rte_flow_ops otx2_flow_ops; diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c new file mode 100644 index 000000000..6078a827b --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -0,0 +1,387 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +int +otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id, + uint64_t *count) +{ + struct npc_mcam_oper_counter_req *req; + struct npc_mcam_oper_counter_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + + *count = rsp->stat; + return rc; +} + +int +otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry) +{ + struct npc_mcam_free_entry_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->entry = entry; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox) +{ + struct npc_mcam_free_entry_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->all = 1; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +static void +flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len) +{ + int idx; + + for (idx = 0; idx < len; idx++) + ptr[idx] = data[len - 1 - idx]; +} + +static int +flow_check_copysz(size_t size, size_t len) +{ + if (len <= size) + return len; + return -1; +} + +static inline int +flow_mem_is_zero(const void *mem, int len) +{ + const char *m = mem; + int i; + + for (i = 0; i < len; i++) { + if (m[i] != 0) + return 0; + } + return 1; +} + +void +otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, int lid, int lt) +{ + struct npc_xtract_info *xinfo; + char *hw_mask = info->hw_mask; + int max_off, offset; + int i, j; + int intf; + + intf = pst->flow->nix_intf; + xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract; + memset(hw_mask, 0, info->len); + + for (i = 0; i < NPC_MAX_LD; i++) { + if (xinfo[i].hdr_off < info->hw_hdr_len) + continue; + + max_off = xinfo[i].hdr_off + xinfo[i].len - info->hw_hdr_len; + + if (xinfo[i].enable == 0) + continue; + + if (max_off > info->len) + max_off = info->len; + + offset = xinfo[i].hdr_off - info->hw_hdr_len; + for (j = offset; j < max_off; j++) + hw_mask[j] = 0xff; + } +} + +int +otx2_flow_update_parse_state(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, int lid, int lt, + uint8_t flags) +{ + uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN]; + uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN]; + struct npc_lid_lt_xtract_info *xinfo; + int len = 0; + int intf; + int i; + + 
otx2_npc_dbg("Parse state function info mask total %s", + (const uint8_t *)info->mask); + + pst->layer_mask |= lid; + pst->lt[lid] = lt; + pst->flags[lid] = flags; + + intf = pst->flow->nix_intf; + xinfo = &pst->npc->prx_dxcfg[intf][lid][lt]; + otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating); + if (xinfo->is_terminating) + pst->terminate = 1; + + /* Need to check if flags are supported but in latest + * KPU profile, flags are used as enumeration! No way, + * it can be validated unless MBOX is changed to return + * set of valid values out of 2**8 possible values. + */ + if (info->spec == NULL) { /* Nothing to match */ + otx2_npc_dbg("Info spec NULL"); + goto done; + } + + /* Copy spec and mask into mcam match string, mask. + * Since both RTE FLOW and OTX2 MCAM use network-endianness + * for data, we are saved from nasty conversions. + */ + for (i = 0; i < NPC_MAX_LD; i++) { + struct npc_xtract_info *x; + int k, idx, hdr_off; + + x = &xinfo->xtract[i]; + len = x->len; + hdr_off = x->hdr_off; + + if (hdr_off < info->hw_hdr_len) + continue; + + if (x->enable == 0) + continue; + + otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d," + "x->key_off = %d", x->hdr_off, len, info->len, + x->key_off); + + hdr_off -= info->hw_hdr_len; + + if (hdr_off + len > info->len) + len = info->len - hdr_off; + + /* Check for over-write of previous layer */ + if (!flow_mem_is_zero(pst->mcam_mask + x->key_off, + len)) { + /* Cannot support this data match */ + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pst->pattern, + "Extraction unsupported"); + return -rte_errno; + } + + len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8) + - x->key_off, + len); + if (len < 0) { + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pst->pattern, + "Internal Error"); + return -rte_errno; + } + + /* Need to reverse complete structure so that dest addr is at + * MSB so as to program the MCAM using mcam_data & mcam_mask + * arrays + */ + flow_prep_mcam_ldata(int_info, + (const uint8_t *)info->spec + hdr_off, + x->len); + flow_prep_mcam_ldata(int_info_mask, + (const uint8_t *)info->mask + hdr_off, + x->len); + + otx2_npc_dbg("Spec: "); + for (k = 0; k < info->len; k++) + otx2_npc_dbg("0x%.2x ", + ((const uint8_t *)info->spec)[k]); + + otx2_npc_dbg("Int_info: "); + for (k = 0; k < info->len; k++) + otx2_npc_dbg("0x%.2x ", int_info[k]); + + memcpy(pst->mcam_mask + x->key_off, int_info_mask, len); + memcpy(pst->mcam_data + x->key_off, int_info, len); + + otx2_npc_dbg("Parse state mcam data & mask"); + for (idx = 0; idx < len ; idx++) + otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx, + *(pst->mcam_data + idx + x->key_off), idx, + *(pst->mcam_mask + idx + x->key_off)); + } + +done: + /* Next pattern to parse by subsequent layers */ + pst->pattern++; + return 0; +} + +static inline int +flow_range_is_valid(const char *spec, const char *last, const char *mask, + int len) +{ + /* Mask must be zero or equal to spec as we do not support + * non-contiguous ranges. + */ + while (len--) { + if (last[len] && + (spec[len] & mask[len]) != (last[len] & mask[len])) + return 0; /* False */ + } + return 1; +} + + +static inline int +flow_mask_is_supported(const char *mask, const char *hw_mask, int len) +{ + /* + * If no hw_mask, assume nothing is supported. 
+ * mask is never NULL + */ + if (hw_mask == NULL) + return flow_mem_is_zero(mask, len); + + while (len--) { + if ((mask[len] | hw_mask[len]) != hw_mask[len]) + return 0; /* False */ + } + return 1; +} + +int +otx2_flow_parse_item_basic(const struct rte_flow_item *item, + struct otx2_flow_item_info *info, + struct rte_flow_error *error) +{ + /* Item must not be NULL */ + if (item == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "Item is NULL"); + return -rte_errno; + } + /* If spec is NULL, both mask and last must be NULL, this + * makes it to match ANY value (eq to mask = 0). + * Setting either mask or last without spec is an error + */ + if (item->spec == NULL) { + if (item->last == NULL && item->mask == NULL) { + info->spec = NULL; + return 0; + } + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "mask or last set without spec"); + return -rte_errno; + } + + /* We have valid spec */ + info->spec = item->spec; + + /* If mask is not set, use default mask, err if default mask is + * also NULL. + */ + if (item->mask == NULL) { + otx2_npc_dbg("Item mask null, using default mask"); + if (info->def_mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "No mask or default mask given"); + return -rte_errno; + } + info->mask = info->def_mask; + } else { + info->mask = item->mask; + } + + /* mask specified must be subset of hw supported mask + * mask | hw_mask == hw_mask + */ + if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) { + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Unsupported field in the mask"); + return -rte_errno; + } + + /* Now we have spec and mask. OTX2 does not support non-contiguous + * range. We should have either: + * - spec & mask == last & mask or, + * - last == 0 or, + * - last == NULL + */ + if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) { + if (!flow_range_is_valid(item->spec, item->last, info->mask, + info->len)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Unsupported range for match"); + return -rte_errno; + } + } + + return 0; +} + +void +otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask) +{ + uint64_t cdata[2] = {0ULL, 0ULL}, nibble; + int i, j = 0; + + for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) { + if (nibble_mask & (1 << i)) { + nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf; + cdata[j / 16] |= (nibble << ((j & 0xf) * 4)); + j += 1; + } + } + + data[0] = cdata[0]; + data[1] = cdata[1]; +} + From patchwork Sun Jun 30 18:05:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55719 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DFE441BBD8; Sun, 30 Jun 2019 20:11:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 150401BA5B for ; Sun, 30 Jun 2019 20:08:19 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5FMc027215 for ; Sun, 30 Jun 2019 11:08:19 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
content-transfer-encoding : content-type; s=pfpt0818; bh=zhxlDQm2py9gQA0GA35Yl18RWQdDmZMH2zQMgDQuM5Q=; b=Qja96rsl5LMe9/A/HHHFMESm9V8gQah8Y7AqshJ+RQb7/rSnJVQABH6vPFSMF1jESGdH Gp7ivhAJ1KwhSa3V/yZ9l2lQ/dAfGto2WDDpt3jl89kqER3KmgdwpOU6HBJEqdiNGTMd 6WtYeyHk5o7aV8EprHDhkDaQo/E8M9TYNePr80GFNYiBvqeIaAi0wYgcZLsPX0dqc1iv gzNu2id/jC3k28YkOd4hHrkRD3SiKDD86ZKryMyeBhf5o7X3ctz2AI57tTIo2FOt3njX ChGSVK8x66CGXNE6EU4Nfns8zNKzQCxClqspzvGjZUTntS7jUsge+K7t1El7JgH6ivMq AQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gkt-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:18 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:17 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:17 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CF45A3F703F; Sun, 30 Jun 2019 11:08:15 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:46 +0530 Message-ID: <20190630180609.36705-35-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 34/57] net/octeontx2: add flow mbox utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding mailbox utility functions for rte_flow. These will be used to alloc, reserve and write the entries to the device on request. 
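The mailbox helpers added by this patch all follow the same synchronous request/response idiom: allocate a typed message on the mailbox ring, fill in the request fields, kick the AF, then block for the typed response. Below is a minimal sketch of that idiom, reusing only the calls and fields that appear in this series; the example_ name itself is hypothetical and not part of the patch.

/* Sketch only: allocate a single MCAM entry placed at lower priority
 * than ref_entry, following the mbox idiom used by these helpers.
 */
static int
example_mcam_alloc_one(struct otx2_mbox *mbox, uint16_t ref_entry,
		       uint32_t *entry)
{
	struct npc_mcam_alloc_entry_req *req;
	struct npc_mcam_alloc_entry_rsp *rsp;
	int rc;

	req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
	req->contig = 1;                     /* ask for a contiguous range */
	req->count = 1;                      /* just one entry */
	req->priority = NPC_MCAM_LOWER_PRIO; /* place after ref_entry */
	req->ref_entry = ref_entry;

	otx2_mbox_msg_send(mbox, 0);         /* kick the AF on mbox 0 */
	rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
	if (rc)
		return rc;

	*entry = rsp->entry;                 /* first allocated entry id */
	return 0;
}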
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.h | 6 + drivers/net/octeontx2/otx2_flow_utils.c | 259 ++++++++++++++++++++++++ 2 files changed, 265 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h index f5cc3b983..a37d86512 100644 --- a/drivers/net/octeontx2/otx2_flow.h +++ b/drivers/net/octeontx2/otx2_flow.h @@ -387,4 +387,10 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev, int otx2_flow_free_all_resources(struct otx2_eth_dev *hw); int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid); + +int +flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, + int req_prio); #endif /* __OTX2_FLOW_H__ */ diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c index 6078a827b..c56a22ed1 100644 --- a/drivers/net/octeontx2/otx2_flow_utils.c +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -385,3 +385,262 @@ otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask) data[1] = cdata[1]; } +static int +flow_first_set_bit(uint64_t slab) +{ + int num = 0; + + if ((slab & 0xffffffff) == 0) { + num += 32; + slab >>= 32; + } + if ((slab & 0xffff) == 0) { + num += 16; + slab >>= 16; + } + if ((slab & 0xff) == 0) { + num += 8; + slab >>= 8; + } + if ((slab & 0xf) == 0) { + num += 4; + slab >>= 4; + } + if ((slab & 0x3) == 0) { + num += 2; + slab >>= 2; + } + if ((slab & 0x1) == 0) + num += 1; + + return num; +} + +static int +flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + uint32_t old_ent, uint32_t new_ent) +{ + struct npc_mcam_shift_entry_req *req; + struct npc_mcam_shift_entry_rsp *rsp; + struct otx2_flow_list *list; + struct rte_flow *flow_iter; + int rc = 0; + + otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent, + flow->priority); + + list = &flow_info->flow_list[flow->priority]; + + /* Old entry is disabled & its contents are moved to new_entry, + * new entry is enabled finally.
+ */ + req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox); + req->curr_entry[0] = old_ent; + req->new_entry[0] = new_ent; + req->shift_count = 1; + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc) + return rc; + + /* Remove old node from list */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id == old_ent) + TAILQ_REMOVE(list, flow_iter, next); + } + + /* Insert node with new mcam id at right place */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > new_ent) + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + } + return rc; +} + +/* Exchange all required entries with a given priority level */ +static int +flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl) +{ + struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp; + uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries; + uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0; + /* Bit position within the slab */ + uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0; + /* Overall bit position of the start of slab */ + /* free & live entry index */ + int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0; + struct otx2_mcam_ents_info *ent_info; + /* free & live bitmap slab */ + uint64_t sl_fr = 0, sl_lv = 0, *sl; + + fr_bmp = flow_info->free_entries[prio_lvl]; + fr_bmp_rev = flow_info->free_entries_rev[prio_lvl]; + lv_bmp = flow_info->live_entries[prio_lvl]; + lv_bmp_rev = flow_info->live_entries_rev[prio_lvl]; + ent_info = &flow_info->flow_entry_info[prio_lvl]; + mcam_entries = flow_info->mcam_entries; + + + /* New entries allocated are always contiguous, but older entries + * already in free/live bitmap can be non-contiguous: so return + * shifted entries should be in non-contiguous format. + */ + while (idx <= rsp->count) { + if (!sl_fr && !sl_lv) { + /* Lower index elements to be exchanged */ + if (dir < 0) { + rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr); + rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv); + otx2_npc_dbg("Fwd slab rc fr %u rc lv %u " + "e_fr %u e_lv %u", rc_fr, rc_lv, + e_fr, e_lv); + } else { + rc_fr = rte_bitmap_scan(fr_bmp_rev, + &sl_fr_bit_off, + &sl_fr); + rc_lv = rte_bitmap_scan(lv_bmp_rev, + &sl_lv_bit_off, + &sl_lv); + + otx2_npc_dbg("Rev slab rc fr %u rc lv %u " + "e_fr %u e_lv %u", rc_fr, rc_lv, + e_fr, e_lv); + } + } + + if (rc_fr) { + fr_bit_pos = flow_first_set_bit(sl_fr); + e_fr = sl_fr_bit_off + fr_bit_pos; + otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos); + } else { + e_fr = ~(0); + } + + if (rc_lv) { + lv_bit_pos = flow_first_set_bit(sl_lv); + e_lv = sl_lv_bit_off + lv_bit_pos; + otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos); + } else { + e_lv = ~(0); + } + + /* First entry is from free_bmap */ + if (e_fr < e_lv) { + bmp = fr_bmp; + e = e_fr; + sl = &sl_fr; + bit_pos = fr_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + otx2_npc_dbg("Fr e %u e_id %u", e, e_id); + } else { + bmp = lv_bmp; + e = e_lv; + sl = &sl_lv; + bit_pos = lv_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + + otx2_npc_dbg("Lv e %u e_id %u", e, e_id); + if (idx < rsp->count) + rc = + flow_shift_lv_ent(mbox, flow, + flow_info, e_id, + rsp->entry + idx); + } + + rte_bitmap_clear(bmp, e); + rte_bitmap_set(bmp, rsp->entry + idx); + /* Update entry list, use non-contiguous + * list now. 
+ */ + rsp->entry_list[idx] = e_id; + *sl &= ~(1ULL << bit_pos); + + /* Update min & max entry identifiers in current + * priority level. + */ + if (dir < 0) { + ent_info->max_id = rsp->entry + idx; + ent_info->min_id = e_id; + } else { + ent_info->max_id = e_id; + ent_info->min_id = rsp->entry; + } + + idx++; + } + return rc; +} + +/* Validate if newly allocated entries lie in the correct priority zone + * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy. + * If not properly aligned, shift entries to do so. + */ +int +flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, + int req_prio) +{ + int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority; + struct otx2_mcam_ents_info *info = flow_info->flow_entry_info; + int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1; + uint32_t tot_ent = 0; + + otx2_npc_dbg("Dir %d, priority = %d", dir, prio); + + if (dir < 0) + prio_idx = flow_info->flow_max_priority - 1; + + /* Only live entries need to be shifted, free entries can just be + * moved by bits manipulation. + */ + + /* For dir = -1 (NPC_MCAM_LOWER_PRIO), when shifting, + * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining + * higher priority level entries (lower indexes). + * + * For dir = +1 (NPC_MCAM_HIGHER_PRIO), during shift, + * NPC_MAX_PREALLOC_ENT entries are exchanged with the adjoining + * lower priority level entries (higher indexes), starting with the + * highest indexes. + */ + do { + tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent; + + if (dir < 0 && prio_idx != prio && + rsp->entry > info[prio_idx].max_id && tot_ent) { + otx2_npc_dbg("Rsp entry %u prio idx %u " + "max id %u", rsp->entry, prio_idx, + info[prio_idx].max_id); + + needs_shift = 1; + } else if ((dir > 0) && (prio_idx != prio) && + (rsp->entry < info[prio_idx].min_id) && tot_ent) { + otx2_npc_dbg("Rsp entry %u prio idx %u " + "min id %u", rsp->entry, prio_idx, + info[prio_idx].min_id); + needs_shift = 1; + } + + otx2_npc_dbg("Needs_shift = %d", needs_shift); + if (needs_shift) { + needs_shift = 0; + rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir, + prio_idx); + } else { + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + } while ((prio_idx != prio) && (prio_idx += dir)); + + return rc; +} From patchwork Sun Jun 30 18:05:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55730 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id CE1F01BC00; Sun, 30 Jun 2019 20:11:20 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 24CDF1BA57 for ; Sun, 30 Jun 2019 20:08:22 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OL028454 for ; Sun, 30 Jun 2019 11:08:21 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=UUXgGR9t/cwVfel4NnRrRocvWMWIkJrRa9rI8vePsZs=; b=LVk0XzH/l6PgBa6/5hQ7fCXsH0PYZwmOe2R8nZRd6AIShLMC+fOsH+thF8CbpzBv47pG
F9Rw45h65iAz+EJA0a72YMb7NVLYlNpB+AIq6HnQTiPo4YpApJgV2Bc33O7Y4iiPDVkT RPZbozx7RWIdZIaaKzxrKYmfCnLD6/AcDjOb3IK152B0bLOzMdn9aAvfx+GHnxgiy9rt ha9MAmR7B+k4oHA01iC+oHJXLHVTshzUOUJ89VEXaOTYu/ikoKT2+HgnvxGlBDa5aKQX Ms+A6Da6/HEidL4g16l5D2juflOLynB/Qd1bgNZ+sTNKdpqPvVZDvQdIzjxM6M4Vclhi zA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gm6-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:21 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:20 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:20 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 967723F703F; Sun, 30 Jun 2019 11:08:18 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:47 +0530 Message-ID: <20190630180609.36705-36-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 35/57] net/octeontx2: add flow MCAM utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding MCAM utility functions to alloc and write the entries. These will be used to arrange the flow rules based on priority. 
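For context on how these utilities arrange rules: each priority level keeps two bitmaps of MCAM entries, "free" (preallocated but unused) and "live" (holding a rule), plus reversed twins indexed from the top of the MCAM so the cache can be scanned from either end. The following is a minimal sketch of that bookkeeping invariant, assuming only the otx2_npc_flow_info fields this patch uses; the example_ helper is hypothetical and not part of the patch.

/* Sketch only: promote one cached entry from free to live. All four
 * bitmaps and the per-priority counters must stay in sync.
 */
static void
example_mark_entry_live(struct otx2_npc_flow_info *fi, int prio,
			uint32_t ent)
{
	uint32_t rev = fi->mcam_entries - ent - 1; /* top-down index */

	rte_bitmap_clear(fi->free_entries[prio], ent);
	rte_bitmap_clear(fi->free_entries_rev[prio], rev);
	rte_bitmap_set(fi->live_entries[prio], ent);
	rte_bitmap_set(fi->live_entries_rev[prio], rev);

	fi->flow_entry_info[prio].free_ent--;
	fi->flow_entry_info[prio].live_ent++;
}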
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.h | 6 - drivers/net/octeontx2/otx2_flow_utils.c | 266 +++++++++++++++++++++++- 2 files changed, 265 insertions(+), 7 deletions(-) diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h index a37d86512..f5cc3b983 100644 --- a/drivers/net/octeontx2/otx2_flow.h +++ b/drivers/net/octeontx2/otx2_flow.h @@ -387,10 +387,4 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev, int otx2_flow_free_all_resources(struct otx2_eth_dev *hw); int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid); - -int -flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, - struct otx2_npc_flow_info *flow_info, - struct npc_mcam_alloc_entry_rsp *rsp, - int req_prio); #endif /* __OTX2_FLOW_H__ */ diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c index c56a22ed1..8a0fe7615 100644 --- a/drivers/net/octeontx2/otx2_flow_utils.c +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -5,6 +5,22 @@ #include "otx2_ethdev.h" #include "otx2_flow.h" +static int +flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr) +{ + struct npc_mcam_alloc_counter_req *req; + struct npc_mcam_alloc_counter_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox); + req->count = 1; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + + *ctr = rsp->cntr_list[0]; + return rc; +} + int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id) { @@ -585,7 +601,7 @@ flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow, * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy. * If not properly aligned, shift entries to do so */ -int +static int flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, struct otx2_npc_flow_info *flow_info, struct npc_mcam_alloc_entry_rsp *rsp, @@ -644,3 +660,251 @@ flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, return rc; } + +static int +flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio, + int prio_lvl) +{ + struct otx2_mcam_ents_info *info = flow_info->flow_entry_info; + int step = 1; + + while (step < flow_info->flow_max_priority) { + if (((prio_lvl + step) < flow_info->flow_max_priority) && + info[prio_lvl + step].live_ent) { + *prio = NPC_MCAM_HIGHER_PRIO; + return info[prio_lvl + step].min_id; + } + + if (((prio_lvl - step) >= 0) && + info[prio_lvl - step].live_ent) { + otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step, + info[prio_lvl - step].live_ent); + *prio = NPC_MCAM_LOWER_PRIO; + return info[prio_lvl - step].max_id; + } + step++; + } + *prio = NPC_MCAM_ANY_PRIO; + return 0; +} + +static int +flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, uint32_t *free_ent) +{ + struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev; + struct npc_mcam_alloc_entry_rsp rsp_local; + struct npc_mcam_alloc_entry_rsp *rsp_cmd; + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct otx2_mcam_ents_info *info; + uint16_t ref_ent, idx; + int rc, prio; + + info = &flow_info->flow_entry_info[flow->priority]; + free_bmp = flow_info->free_entries[flow->priority]; + free_bmp_rev = flow_info->free_entries_rev[flow->priority]; + live_bmp = flow_info->live_entries[flow->priority]; + live_bmp_rev = flow_info->live_entries_rev[flow->priority]; + + ref_ent = 
flow_find_ref_entry(flow_info, &prio, flow->priority); + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + req->contig = 1; + req->count = flow_info->flow_prealloc_size; + req->priority = prio; + req->ref_entry = ref_ent; + + otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio); + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd); + if (rc) + return rc; + + rsp = &rsp_local; + memcpy(rsp, rsp_cmd, sizeof(*rsp)); + + otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry, + rsp->count, prio); + + /* Non-first ent cache fill */ + if (prio != NPC_MCAM_ANY_PRIO) { + flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp, + prio); + } else { + /* Copy into response entry list */ + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + + otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count); + /* Update free entries, reverse free entries list, + * min & max entry ids. + */ + for (idx = 0; idx < rsp->count; idx++) { + if (unlikely(rsp->entry_list[idx] < info->min_id)) + info->min_id = rsp->entry_list[idx]; + + if (unlikely(rsp->entry_list[idx] > info->max_id)) + info->max_id = rsp->entry_list[idx]; + + /* Skip entry to be returned, not to be part of free + * list. + */ + if (prio == NPC_MCAM_HIGHER_PRIO) { + if (unlikely(idx == (rsp->count - 1))) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } else { + if (unlikely(!idx)) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } + info->free_ent++; + rte_bitmap_set(free_bmp, rsp->entry_list[idx]); + rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries - + rsp->entry_list[idx] - 1); + + otx2_npc_dbg("Final rsp entry %u rsp entry rev %u", + rsp->entry_list[idx], + flow_info->mcam_entries - rsp->entry_list[idx] - 1); + } + + otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent, + flow_info->mcam_entries - *free_ent - 1); + info->live_ent++; + rte_bitmap_set(live_bmp, *free_ent); + rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1); + + return 0; +} + +static int +flow_check_preallocated_entry_cache(struct otx2_mbox *mbox, + struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info) +{ + struct rte_bitmap *free, *free_rev, *live, *live_rev; + uint32_t pos = 0, free_ent = 0, mcam_entries; + struct otx2_mcam_ents_info *info; + uint64_t slab = 0; + int rc; + + otx2_npc_dbg("Flow priority %u", flow->priority); + + info = &flow_info->flow_entry_info[flow->priority]; + + free_rev = flow_info->free_entries_rev[flow->priority]; + free = flow_info->free_entries[flow->priority]; + live_rev = flow_info->live_entries_rev[flow->priority]; + live = flow_info->live_entries[flow->priority]; + mcam_entries = flow_info->mcam_entries; + + if (info->free_ent) { + rc = rte_bitmap_scan(free, &pos, &slab); + if (rc) { + /* Get free_ent from free entry bitmap */ + free_ent = pos + __builtin_ctzll(slab); + otx2_npc_dbg("Allocated from cache entry %u", free_ent); + /* Remove from free bitmaps and add to live ones */ + rte_bitmap_clear(free, free_ent); + rte_bitmap_set(live, free_ent); + rte_bitmap_clear(free_rev, + mcam_entries - free_ent - 1); + rte_bitmap_set(live_rev, + mcam_entries - free_ent - 1); + + info->free_ent--; + info->live_ent++; + return free_ent; + } + + otx2_npc_dbg("No free entry:its a mess"); + return -1; + } + + rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent); + if (rc) + return rc; + + return free_ent; +} + +int +otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox, + __rte_unused 
struct otx2_parse_state *pst, + struct otx2_npc_flow_info *flow_info) +{ + int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1); + struct npc_mcam_write_entry_req *req; + struct mbox_msghdr *rsp; + uint16_t ctr = ~(0); + int rc, idx; + int entry; + + if (use_ctr) { + rc = flow_mcam_alloc_counter(mbox, &ctr); + if (rc) + return rc; + } + + entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info); + if (entry < 0) { + otx2_err("Prealloc failed"); + otx2_flow_mcam_free_counter(mbox, ctr); + return NPC_MCAM_ALLOC_FAILED; + } + req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox); + req->set_cntr = use_ctr; + req->cntr = ctr; + req->entry = entry; + otx2_npc_dbg("Alloc & write entry %u", entry); + + req->intf = + (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX; + req->enable_entry = 1; + req->entry_data.action = flow->npc_action; + + /* + * DPDK sets vtag action on per interface basis, not + * per flow basis. It is a matter of how we decide to support + * this pmd specific behavior. There are two ways: + * 1. Inherit the vtag action from the one configured + * for this interface. This can be read from the + * vtag_action configured for default mcam entry of + * this pf_func. + * 2. Do not support vtag action with rte_flow. + * + * Second approach is used now. + */ + req->entry_data.vtag_action = 0ULL; + + for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) { + req->entry_data.kw[idx] = flow->mcam_data[idx]; + req->entry_data.kw_mask[idx] = flow->mcam_mask[idx]; + } + + if (flow->nix_intf == OTX2_INTF_RX) { + req->entry_data.kw[0] |= flow_info->channel; + req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1); + } else { + uint16_t pf_func = (flow->npc_action >> 4) & 0xffff; + + pf_func = htons(pf_func); + req->entry_data.kw[0] |= ((uint64_t)pf_func << 32); + req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32); + } + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc != 0) + return rc; + + flow->mcam_id = entry; + if (use_ctr) + flow->ctr_id = ctr; + return 0; +} From patchwork Sun Jun 30 18:05:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55705 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BD81E1B9C8; Sun, 30 Jun 2019 20:10:14 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 2E5731BA92 for ; Sun, 30 Jun 2019 20:08:25 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6fu9028350 for ; Sun, 30 Jun 2019 11:08:24 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=oyd2ri10+dPwYI19Y4MjxEvX4HiJiDh2Yirce9n2QhI=; b=tAQeN2hHskpfCyxYnI8gudP/k0YvO0xfrNMPCPIewi8IO5TdTzcGrAQOofQSv+IStm9l H+Y7xthF5KzCsCpb1dgrJSVcxevEnCz4fQJzGR2JDnRSAEzOC8/lH50vK2pwb4DJzfJy D2jl7ATSantk3uB+SwegZMsckbFgRomSrqJhMvm3TwTwP9Zow6PxYaAJm74349VZ/NSy PD5eH9zgsuMjcwtu4aBCoIdilTqcsiQx5E4UXD8+TgmgzBkkHlDwNOq0W6LzaNhSutMF c7rCDR7psxJNMr0MwiADf+nixQTmsOqrqR07gmpJHSDP3DizlKFhOLF3wIuP6S65L6lj pg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by 
mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gm8-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:24 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:23 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:23 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 6B7CF3F703F; Sun, 30 Jun 2019 11:08:21 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:48 +0530 Message-ID: <20190630180609.36705-37-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 36/57] net/octeontx2: add flow parsing for outer layers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding functionality to parse outer layers, from LD to LH. These will be used to parse the outer L2, L3 and L4 layers as well as tunnel types. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_flow_parse.c | 459 ++++++++++++++++++++++++ 3 files changed, 461 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 8d1aeae3f..3eb4dba53 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_lookup.c \ otx2_ethdev.c \ otx2_flow_ctrl.c \ + otx2_flow_parse.c \ otx2_flow_utils.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 75156ddbe..f608c4947 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -12,6 +12,7 @@ sources = files( 'otx2_lookup.c', 'otx2_ethdev.c', 'otx2_flow_ctrl.c', + 'otx2_flow_parse.c', 'otx2_flow_utils.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c new file mode 100644 index 000000000..d27a24833 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -0,0 +1,459 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd.
+ */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +const struct rte_flow_item * +otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern) +{ + while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) || + (pattern->type == RTE_FLOW_ITEM_TYPE_ANY)) + pattern++; + + return pattern; +} + +/* + * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP, + * Tunnel+SCTP + */ +int +otx2_flow_parse_lh(struct otx2_parse_state *pst) +{ + struct otx2_flow_item_info info; + char hw_mask[64]; + int lid, lt; + int rc; + + if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LH; + + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_UDP: + lt = NPC_LT_LH_TU_UDP; + info.def_mask = &rte_flow_item_udp_mask; + info.len = sizeof(struct rte_flow_item_udp); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + lt = NPC_LT_LH_TU_TCP; + info.def_mask = &rte_flow_item_tcp_mask; + info.len = sizeof(struct rte_flow_item_tcp); + break; + case RTE_FLOW_ITEM_TYPE_SCTP: + lt = NPC_LT_LH_TU_SCTP; + info.def_mask = &rte_flow_item_sctp_mask; + info.len = sizeof(struct rte_flow_item_sctp); + break; + case RTE_FLOW_ITEM_TYPE_ESP: + lt = NPC_LT_LH_TU_ESP; + info.def_mask = &rte_flow_item_esp_mask; + info.len = sizeof(struct rte_flow_item_esp); + break; + default: + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* Tunnel+IPv4, Tunnel+IPv6 */ +int +otx2_flow_parse_lg(struct otx2_parse_state *pst) +{ + struct otx2_flow_item_info info; + char hw_mask[64]; + int lid, lt; + int rc; + + if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LG; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) { + lt = NPC_LT_LG_TU_IP; + info.def_mask = &rte_flow_item_ipv4_mask; + info.len = sizeof(struct rte_flow_item_ipv4); + } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) { + lt = NPC_LT_LG_TU_IP6; + info.def_mask = &rte_flow_item_ipv6_mask; + info.len = sizeof(struct rte_flow_item_ipv6); + } else { + /* There is no tunneled IP header */ + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* Tunnel+Ether */ +int +otx2_flow_parse_lf(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern, *last_pattern; + struct rte_flow_item_eth hw_mask; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + /* We hit this layer if there is a tunneling protocol */ + if (!pst->tunnel) + return 0; + + if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LF; + lt = NPC_LT_LF_TU_ETHER; + lflags = 0; + + info.def_mask = &rte_flow_item_vlan_mask; + /* No match support for vlan tags */ + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + + /* Look ahead and find out any VLAN tags. These can be + * detected but no data matching is available. 
+ */ + last_pattern = pst->pattern; + pattern = pst->pattern + 1; + pattern = otx2_flow_skip_void_and_any_items(pattern); + while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + nr_vlans++; + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc != 0) + return rc; + last_pattern = pattern; + pattern++; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + otx2_npc_dbg("Nr_vlans = %d", nr_vlans); + switch (nr_vlans) { + case 0: + break; + case 1: + lflags = NPC_F_TU_ETHER_CTAG; + break; + case 2: + lflags = NPC_F_TU_ETHER_STAG_CTAG; + break; + default: + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + last_pattern, + "more than 2 vlans with tunneled Ethernet " + "not supported"); + return -rte_errno; + } + + info.def_mask = &rte_flow_item_eth_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_eth); + info.hw_hdr_len = 0; + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + pst->pattern = last_pattern; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +otx2_flow_parse_le(struct otx2_parse_state *pst) +{ + /* + * We are positioned at UDP. Scan ahead and look for + * UDP encapsulated tunnel protocols. If available, + * parse them. In that case handle this: + * - RTE spec assumes we point to tunnel header. + * - NPC parser provides offset from UDP header. + */ + + /* + * Note: Add support to GENEVE, VXLAN_GPE when we + * upgrade DPDK + * + * Note: Better to split flags into two nibbles: + * - Higher nibble can have flags + * - Lower nibble to further enumerate protocols + * and have flags based extraction + */ + const struct rte_flow_item *pattern = pst->pattern; + struct otx2_flow_item_info info; + int lid, lt, lflags; + char hw_mask[64]; + int rc; + + if (pst->tunnel) + return 0; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) + return otx2_flow_parse_mpls(pst, NPC_LID_LE); + + info.spec = NULL; + info.mask = NULL; + info.hw_mask = NULL; + info.def_mask = NULL; + info.len = 0; + info.hw_hdr_len = 0; + lid = NPC_LID_LE; + lflags = 0; + + /* Ensure we are not matching anything in UDP */ + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc) + return rc; + + info.hw_mask = &hw_mask; + pattern = otx2_flow_skip_void_and_any_items(pattern); + otx2_npc_dbg("Pattern->type = %d", pattern->type); + switch (pattern->type) { + case RTE_FLOW_ITEM_TYPE_VXLAN: + lflags = NPC_F_UDP_VXLAN; + info.def_mask = &rte_flow_item_vxlan_mask; + info.len = sizeof(struct rte_flow_item_vxlan); + lt = NPC_LT_LE_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_GTPC: + lflags = NPC_F_UDP_GTP_GTPC; + info.def_mask = &rte_flow_item_gtp_mask; + info.len = sizeof(struct rte_flow_item_gtp); + lt = NPC_LT_LE_GTPC; + break; + case RTE_FLOW_ITEM_TYPE_GTPU: + lflags = NPC_F_UDP_GTP_GTPU_G_PDU; + info.def_mask = &rte_flow_item_gtp_mask; + info.len = sizeof(struct rte_flow_item_gtp); + lt = NPC_LT_LE_GTPU; + break; + default: + return 0; + } + + pst->tunnel = 1; + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +static int +flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag) +{ + int nr_labels = 0; + const struct rte_flow_item *pattern = pst->pattern; + struct otx2_flow_item_info info; + int rc; + 
uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS, + NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS}; + + /* + * pst->pattern points to first MPLS label. We only check + * that subsequent labels do not have anything to match. + */ + info.def_mask = &rte_flow_item_mpls_mask; + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_mpls); + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + + while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) { + nr_labels++; + + /* Basic validation of 2nd/3rd/4th mpls item */ + if (nr_labels > 1) { + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + } + pst->last_pattern = pattern; + pattern++; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + + if (nr_labels > 4) { + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pst->last_pattern, + "more than 4 mpls labels not supported"); + return -rte_errno; + } + + *flag = flag_list[nr_labels - 1]; + return 0; +} + +int +otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid) +{ + /* Find number of MPLS labels */ + struct rte_flow_item_mpls hw_mask; + struct otx2_flow_item_info info; + int lt, lflags; + int rc; + + lflags = 0; + + if (lid == NPC_LID_LC) + lt = NPC_LT_LC_MPLS; + else if (lid == NPC_LID_LD) + lt = NPC_LT_LD_TU_MPLS_IN_IP; + else + lt = NPC_LT_LE_TU_MPLS_IN_UDP; + + /* Prepare for parsing the first item */ + info.def_mask = &rte_flow_item_mpls_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_mpls); + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + /* + * Parse for more labels. + * This sets lflags and pst->last_pattern correctly. + */ + rc = flow_parse_mpls_label_stack(pst, &lflags); + if (rc != 0) + return rc; + + pst->tunnel = 1; + pst->pattern = pst->last_pattern; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +/* + * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE, + * GTP, GTPC, GTPU, ESP + * + * Note: UDP tunnel protocols are identified by flags. + * LPTR for these protocol still points to UDP + * header. Need flag based extraction to support + * this. + */ +int +otx2_flow_parse_ld(struct otx2_parse_state *pst) +{ + char hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int rc; + + if (pst->tunnel) { + /* We have already parsed MPLS or IPv4/v6 followed + * by MPLS or IPv4/v6. Subsequent TCP/UDP etc + * would be parsed as tunneled versions. Skip + * this layer, except for tunneled MPLS. If LC is + * MPLS, we have anyway skipped all stacked MPLS + * labels. 
+ */ + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) + return otx2_flow_parse_mpls(pst, NPC_LID_LD); + return 0; + } + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.def_mask = NULL; + info.len = 0; + info.hw_hdr_len = 0; + + lid = NPC_LID_LD; + lflags = 0; + + otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type); + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_ICMP: + if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6) + lt = NPC_LT_LD_ICMP6; + else + lt = NPC_LT_LD_ICMP; + info.def_mask = &rte_flow_item_icmp_mask; + info.len = sizeof(struct rte_flow_item_icmp); + break; + case RTE_FLOW_ITEM_TYPE_UDP: + lt = NPC_LT_LD_UDP; + info.def_mask = &rte_flow_item_udp_mask; + info.len = sizeof(struct rte_flow_item_udp); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + lt = NPC_LT_LD_TCP; + info.def_mask = &rte_flow_item_tcp_mask; + info.len = sizeof(struct rte_flow_item_tcp); + break; + case RTE_FLOW_ITEM_TYPE_SCTP: + lt = NPC_LT_LD_SCTP; + info.def_mask = &rte_flow_item_sctp_mask; + info.len = sizeof(struct rte_flow_item_sctp); + break; + case RTE_FLOW_ITEM_TYPE_ESP: + lt = NPC_LT_LD_ESP; + info.def_mask = &rte_flow_item_esp_mask; + info.len = sizeof(struct rte_flow_item_esp); + break; + case RTE_FLOW_ITEM_TYPE_GRE: + lt = NPC_LT_LD_GRE; + info.def_mask = &rte_flow_item_gre_mask; + info.len = sizeof(struct rte_flow_item_gre); + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + lt = NPC_LT_LD_GRE; + lflags = NPC_F_GRE_NVGRE; + info.def_mask = &rte_flow_item_nvgre_mask; + info.len = sizeof(struct rte_flow_item_nvgre); + /* Further IP/Ethernet are parsed as tunneled */ + pst->tunnel = 1; + break; + default: + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} From patchwork Sun Jun 30 18:05:49 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55706 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 891E01B9E8; Sun, 30 Jun 2019 20:10:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 184B11BAC7 for ; Sun, 30 Jun 2019 20:08:30 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI50uM015649 for ; Sun, 30 Jun 2019 11:08:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=2WjK49EnpgK0cCBAUeRhN28rovgmEnFTIMWksXQpF4s=; b=p2NnRnPf4sHavn7DEayUUl7boeB1G8Cf7T8+uMUQiMi7N99TOenhG6mUV9O7bRywNoQg CREFjtN6VsM4k4yXSlDpaDU57N0Ye7a4dvn/wz2i1CRlcJMGEPtLzChGDtuuu9OVzFm9 43vZL2sujoX6igJ4LSjhBPB30UdeKwYSxHVswzNkQFdtJxzAnJdtjRb7tjTQBeDYDh6L rTGG7LD8K79T7N6e4rxaMi7eQrQVTEFDW63w5w2hndvr+QIyZtzhxtoBhMVwI/D7ldeJ c8u33EnhT7odl5OknJ/hYr+m5UB1Rv7WCE7132I4gJBa6LMF6fEMhF0e+L+eDkofI6Vp lw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yd2-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:29 -0700 Received: 
from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:25 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:25 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 43B023F703F; Sun, 30 Jun 2019 11:08:24 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:49 +0530 Message-ID: <20190630180609.36705-38-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 37/57] net/octeontx2: add flow actions support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding functionality to parse flow patterns for the LA (Ethernet), LB (VLAN, E-TAG) and LC (IPv4, IPv6, ARP, MPLS) layers. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow_parse.c | 210 ++++++++++++++++++++++++ 1 file changed, 210 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index d27a24833..79c60a9ea 100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -457,3 +457,213 @@ otx2_flow_parse_ld(struct otx2_parse_state *pst) return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); } + +static inline void +flow_check_lc_ip_tunnel(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern = pst->pattern + 1; + + pattern = otx2_flow_skip_void_and_any_items(pattern); + if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS || + pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 || + pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) + pst->tunnel = 1; +} + +/* Outer IPv4, Outer IPv6, MPLS, ARP */ +int +otx2_flow_parse_lc(struct otx2_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt; + int rc; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) + return otx2_flow_parse_mpls(pst, NPC_LID_LC); + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = 0; + lid = NPC_LID_LC; + + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + lt = NPC_LT_LC_IP; + info.def_mask = &rte_flow_item_ipv4_mask; + info.len = sizeof(struct rte_flow_item_ipv4); + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + lid = NPC_LID_LC; + lt = NPC_LT_LC_IP6; + info.def_mask = &rte_flow_item_ipv6_mask; + info.len = sizeof(struct rte_flow_item_ipv6); + break; + case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4: + lt = NPC_LT_LC_ARP; + info.def_mask = &rte_flow_item_arp_eth_ipv4_mask; + info.len = sizeof(struct rte_flow_item_arp_eth_ipv4); + break; + default: + /* No match at this layer */ + return 0; + } + + /* Identify whether this IP layer tunnels MPLS or IPv4/v6 */ + flow_check_lc_ip_tunnel(pst); + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern,
&info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* VLAN, ETAG */ +int +otx2_flow_parse_lb(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern = pst->pattern; + const struct rte_flow_item *last_pattern; + char hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + info.spec = NULL; + info.mask = NULL; + info.hw_hdr_len = NPC_TPID_LENGTH; + + lid = NPC_LID_LB; + lflags = 0; + last_pattern = pattern; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + /* RTE vlan is either 802.1q or 802.1ad, + * this maps to either CTAG/STAG. We need to decide + * based on number of VLANS present. Matching is + * supported on first tag only. + */ + info.def_mask = &rte_flow_item_vlan_mask; + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + + pattern = pst->pattern; + while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + nr_vlans++; + + /* Basic validation of 2nd/3rd vlan item */ + if (nr_vlans > 1) { + otx2_npc_dbg("Vlans = %d", nr_vlans); + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + } + last_pattern = pattern; + pattern++; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + + switch (nr_vlans) { + case 1: + lt = NPC_LT_LB_CTAG; + break; + case 2: + lt = NPC_LT_LB_STAG; + lflags = NPC_F_STAG_CTAG; + break; + case 3: + lt = NPC_LT_LB_STAG; + lflags = NPC_F_STAG_STAG_CTAG; + break; + default: + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + last_pattern, + "more than 3 vlans not supported"); + return -rte_errno; + } + } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) { + /* we can support ETAG and match a subsequent CTAG + * without any matching support. 
+ */ + lt = NPC_LT_LB_ETAG; + lflags = 0; + + last_pattern = pst->pattern; + pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1); + if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + info.def_mask = &rte_flow_item_vlan_mask; + /* set supported mask to NULL for vlan tag */ + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + + lflags = NPC_F_ETAG_CTAG; + last_pattern = pattern; + } + + info.def_mask = &rte_flow_item_e_tag_mask; + info.len = sizeof(struct rte_flow_item_e_tag); + } else { + return 0; + } + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + /* Point pattern to last item consumed */ + pst->pattern = last_pattern; + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +otx2_flow_parse_la(struct otx2_parse_state *pst) +{ + struct rte_flow_item_eth hw_mask; + struct otx2_flow_item_info info; + int lid, lt; + int rc; + + /* Identify the pattern type into lid, lt */ + if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LA; + lt = NPC_LT_LA_ETHER; + info.hw_hdr_len = 0; + + if (pst->flow->nix_intf == NIX_INTF_TX) { + lt = NPC_LT_LA_IH_NIX_ETHER; + info.hw_hdr_len = NPC_IH_LENGTH; + } + + /* Prepare for parsing the item */ + info.def_mask = &rte_flow_item_eth_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_eth); + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + /* Basic validation of item parameters */ + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc) + return rc; + + /* Update pst if not validate only? clash check? 
*/ + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} From patchwork Sun Jun 30 18:05:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55707 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BA3321B9E5; Sun, 30 Jun 2019 20:10:19 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id AA2CD1BABD for ; Sun, 30 Jun 2019 20:08:30 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI72jG028445 for ; Sun, 30 Jun 2019 11:08:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=7aWQWWcBulOqwfI+pJBGzvHM2b8Un8UtiZS+uTWGkx8=; b=nhASB+aIx1dSs16iCdLkyKJiSdvqvm9ryjUqkT6t1xH0pfTukgsSJdkfOPuBzdGHKlvO NvTCHZkRL1Gv3exOK16H8dukby8+jsO4nvQItcYZQUSNGLkp9Ywq8Rv0JXg2ZHlBvloA bMDsYB1ajBGzOnf1aoqGOYjCPV6fWoS8zklNTFP+pgcO8xYSuERKITUxVsyAGJOn0uRQ GLWNpe/bzog9lW48c1eJf12H/EU2vBvuRefkxpkhbyu7Cs18/Br9NuuJ4EznlCk/FOcH ODYMHoTKumoC7n14NftgTstvq/fu/p3WDUHtjTRrARTpUeZpKsCYV8z8zgMa8wCIfLCz rA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gmh-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:29 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:28 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:28 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 043863F7040; Sun, 30 Jun 2019 11:08:26 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:50 +0530 Message-ID: <20190630180609.36705-39-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 38/57] net/octeontx2: add flow parse actions support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding support to parse flow actions such as drop, count, mark, RSS and queue. On the egress side, only the drop and count actions are supported.
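For context, the parser added below consumes a standard rte_flow action array terminated by RTE_FLOW_ACTION_TYPE_END. The following is a minimal sketch of such an array using only the public rte_flow API; all values are hypothetical, and note that on this PMD a MARK id must stay below 0xfffe.

#include <rte_flow.h>

/* Sketch only: mark matching packets, count them and steer them to
 * Rx queue 0. This is the kind of list otx2_flow_parse_actions()
 * walks below.
 */
static const struct rte_flow_action_mark example_mark = { .id = 0x1234 };
static const struct rte_flow_action_count example_count = { .shared = 0, .id = 0 };
static const struct rte_flow_action_queue example_queue = { .index = 0 };
static const struct rte_flow_action example_actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &example_mark },
	{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &example_count },
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &example_queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END,   .conf = NULL },
};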
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow_parse.c | 276 ++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 1 + 2 files changed, 277 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index 79c60a9ea..4cf5ce17e 100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -667,3 +667,279 @@ otx2_flow_parse_la(struct otx2_parse_state *pst) /* Update pst if not validate only? clash check? */ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); } + +static int +parse_rss_action(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action *act, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_rss_info *rss_info = &hw->rss_info; + const struct rte_flow_action_rss *rss; + uint32_t i; + + rss = (const struct rte_flow_action_rss *)act->conf; + + /* Not supported */ + if (attr->egress) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "No support of RSS in egress"); + } + + if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "multi-queue mode is disabled"); + + /* Parse RSS related parameters from configuration */ + if (!rss || !rss->queue_num) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "no valid queues"); + + if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "non-default RSS hash functions" + " are not supported"); + + if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "RSS hash key too large"); + + if (rss->queue_num > rss_info->rss_size) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, + "too many queues for RSS context"); + + for (i = 0; i < rss->queue_num; i++) { + if (rss->queue[i] >= dev->data->nb_rx_queues) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "queue id > max number" + " of queues"); + } + + return 0; +} + +int +otx2_flow_parse_actions(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + const struct rte_flow_action_count *act_count; + const struct rte_flow_action_mark *act_mark; + const struct rte_flow_action_queue *act_q; + const char *errmsg = NULL; + int sel_act, req_act = 0; + uint16_t pf_func; + int errcode = 0; + int mark = 0; + int rq = 0; + + /* Initialize actions */ + flow->ctr_id = NPC_COUNTER_NONE; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + otx2_npc_dbg("Action type = %d", actions->type); + + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_MARK: + act_mark = + (const struct rte_flow_action_mark *)actions->conf; + + /* We have only 16 bits. 
Use highest val for flag */ + if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) { + errmsg = "mark value must be < 0xfffe"; + errcode = ENOTSUP; + goto err_exit; + } + mark = act_mark->id + 1; + req_act |= OTX2_FLOW_ACT_MARK; + rte_atomic32_inc(&npc->mark_actions); + break; + + case RTE_FLOW_ACTION_TYPE_FLAG: + mark = OTX2_FLOW_FLAG_VAL; + req_act |= OTX2_FLOW_ACT_FLAG; + rte_atomic32_inc(&npc->mark_actions); + break; + + case RTE_FLOW_ACTION_TYPE_COUNT: + act_count = + (const struct rte_flow_action_count *) + actions->conf; + + if (act_count->shared == 1) { + errmsg = "Shared Counters not supported"; + errcode = ENOTSUP; + goto err_exit; + } + /* Indicates, need a counter */ + flow->ctr_id = 1; + req_act |= OTX2_FLOW_ACT_COUNT; + break; + + case RTE_FLOW_ACTION_TYPE_DROP: + req_act |= OTX2_FLOW_ACT_DROP; + break; + + case RTE_FLOW_ACTION_TYPE_QUEUE: + /* Applicable only to ingress flow */ + act_q = (const struct rte_flow_action_queue *) + actions->conf; + rq = act_q->index; + if (rq >= dev->data->nb_rx_queues) { + errmsg = "invalid queue index"; + errcode = EINVAL; + goto err_exit; + } + req_act |= OTX2_FLOW_ACT_QUEUE; + break; + + case RTE_FLOW_ACTION_TYPE_RSS: + errcode = parse_rss_action(dev, attr, actions, error); + if (errcode) + return -rte_errno; + + req_act |= OTX2_FLOW_ACT_RSS; + break; + + case RTE_FLOW_ACTION_TYPE_SECURITY: + /* Assumes user has already configured security + * session for this flow. Associated conf is + * opaque. When RTE security is implemented for otx2, + * we need to verify that for specified security + * session: + * action_type == + * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + * session_protocol == + * RTE_SECURITY_PROTOCOL_IPSEC + * + * RSS is not supported with inline ipsec. Get the + * rq from associated conf, or make + * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this + * action. + * Currently, rq = 0 is assumed. + */ + req_act |= OTX2_FLOW_ACT_SEC; + rq = 0; + break; + default: + errmsg = "Unsupported action specified"; + errcode = ENOTSUP; + goto err_exit; + } + } + + /* Check if actions specified are compatible */ + if (attr->egress) { + /* Only DROP/COUNT is supported */ + if (!(req_act & OTX2_FLOW_ACT_DROP)) { + errmsg = "DROP is required action for egress"; + errcode = EINVAL; + goto err_exit; + } else if (req_act & ~(OTX2_FLOW_ACT_DROP | + OTX2_FLOW_ACT_COUNT)) { + errmsg = "Unsupported action specified"; + errcode = ENOTSUP; + goto err_exit; + } + flow->npc_action = NIX_TX_ACTIONOP_DROP; + goto set_pf_func; + } + + /* We have already verified the attr, this is ingress. + * - Exactly one terminating action is supported + * - Exactly one of MARK or FLAG is supported + * - If terminating action is DROP, only count is valid. 
+ */ + sel_act = req_act & OTX2_FLOW_ACT_TERM; + if ((sel_act & (sel_act - 1)) != 0) { + errmsg = "Only one terminating action supported"; + errcode = EINVAL; + goto err_exit; + } + + if (req_act & OTX2_FLOW_ACT_DROP) { + sel_act = req_act & ~OTX2_FLOW_ACT_COUNT; + if ((sel_act & (sel_act - 1)) != 0) { + errmsg = "Only COUNT action is supported " + "with DROP ingress action"; + errcode = ENOTSUP; + goto err_exit; + } + } + + if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) + == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) { + errmsg = "Only one of FLAG or MARK action is supported"; + errcode = ENOTSUP; + goto err_exit; + } + + /* Set NIX_RX_ACTIONOP */ + if (req_act & OTX2_FLOW_ACT_DROP) { + flow->npc_action = NIX_RX_ACTIONOP_DROP; + } else if (req_act & OTX2_FLOW_ACT_QUEUE) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & OTX2_FLOW_ACT_RSS) { + /* When user added a rule for rss, first we will add the + *rule in MCAM and then update the action, once if we have + *FLOW_KEY_ALG index. So, till we update the action with + *flow_key_alg index, set the action to drop. + */ + if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) + flow->npc_action = NIX_RX_ACTIONOP_DROP; + else + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & OTX2_FLOW_ACT_SEC) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & OTX2_FLOW_ACT_COUNT) { + /* Keep OTX2_FLOW_ACT_COUNT always at the end + * This is default action, when user specify only + * COUNT ACTION + */ + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else { + /* Should never reach here */ + errmsg = "Invalid action specified"; + errcode = EINVAL; + goto err_exit; + } + + if (mark) + flow->npc_action |= (uint64_t)mark << 40; + + if (rte_atomic32_read(&npc->mark_actions) == 1) + hw->rx_offload_flags |= + NIX_RX_OFFLOAD_MARK_UPDATE_F; + +set_pf_func: + /* Ideally AF must ensure that correct pf_func is set */ + pf_func = otx2_pfvf_func(hw->pf, hw->vf); + flow->npc_action |= (uint64_t)pf_func << 4; + + return 0; + +err_exit: + rte_flow_error_set(error, errcode, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + errmsg); + return -rte_errno; +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 0c3627c12..db79451b9 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -13,6 +13,7 @@ sizeof(uint16_t)) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) #define NIX_TIMESYNC_RX_OFFSET 8 From patchwork Sun Jun 30 18:05:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55708 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 394AB1BAB6; Sun, 30 Jun 2019 20:10:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 942301BAC5 for ; Sun, 30 Jun 2019 20:08:33 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73ON028454 for ; Sun, 30 Jun 2019 
11:08:32 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=dVT+6xF9SSLb6P0GsaiByolSXKfOZ+47ZiLV30tdqA0=; b=U4+vzovmN7Wk7SD1vciMmsrxlcuK8/WeqWfA1ajc1HCMfLI0GAv4GYgR8mEgz/Ri4PJk SE2fEhN3Z2n9sltNNfBuwt2txk0YCaXKucqrT4UL+LZx3z6NWySQKYfsYlU5NdrUnfQX 4WS9rkQqS8eE7xcnjI8qiMlnng4XwDDvuYNj4bMnZPoOFkFcDQQUeh3LGm3kT5sEsUcy /sVnDaTBu5v0cnVhDvKFCh7jHMRkv1/MKWzaC8byWtEH3imqF2p3XlKgP5OnTaiFmLEy PsaUzoYLIjUqCZhC0BVAYtPGTD7sZczORDU893GrjTnr5X/SDPVwMMOUHcqBW2yspMvq 4g== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gmr-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:32 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CB7523F703F; Sun, 30 Jun 2019 11:08:29 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:51 +0530 Message-ID: <20190630180609.36705-40-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 39/57] net/octeontx2: add flow operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding the initial flow ops like flow_create and flow_validate. These will be used to alloc and write flow rule to device and validate the flow rule. 
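As a minimal sketch of how these ops get exercised (port_id, attr, pattern and actions are caller-supplied placeholders), an application validates and then creates a rule through the generic rte_flow API, which dispatches into otx2_flow_ops:

#include <rte_flow.h>

static struct rte_flow *
try_create_flow(uint16_t port_id, const struct rte_flow_attr *attr,
		const struct rte_flow_item pattern[],
		const struct rte_flow_action actions[])
{
	struct rte_flow_error err;

	/* Dry run first; validate walks the same parse path as create */
	if (rte_flow_validate(port_id, attr, pattern, actions, &err) != 0)
		return NULL;

	return rte_flow_create(port_id, attr, pattern, actions, &err);
}
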
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_flow.c | 451 ++++++++++++++++++++++++++++++ 3 files changed, 453 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 3eb4dba53..21559f631 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -32,6 +32,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_rss.c \ otx2_mac.c \ otx2_ptp.c \ + otx2_flow.c \ otx2_link.c \ otx2_stats.c \ otx2_lookup.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index f608c4947..f0e03bffe 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -7,6 +7,7 @@ sources = files( 'otx2_rss.c', 'otx2_mac.c', 'otx2_ptp.c', + 'otx2_flow.c', 'otx2_link.c', 'otx2_stats.c', 'otx2_lookup.c', diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c new file mode 100644 index 000000000..896aef00a --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow.c @@ -0,0 +1,451 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +static int +flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox, + struct otx2_npc_flow_info *flow_info) +{ + /* This is non-LDATA part in search key */ + uint64_t key_data[2] = {0ULL, 0ULL}; + uint64_t key_mask[2] = {0ULL, 0ULL}; + int intf = pst->flow->nix_intf; + int key_len, bit = 0, index; + int off, idx, data_off = 0; + uint8_t lid, mask, data; + uint16_t layer_info; + uint64_t lt, flags; + + + /* Skip till Layer A data start */ + while (bit < NPC_PARSE_KEX_S_LA_OFFSET) { + if (flow_info->keyx_supp_nmask[intf] & (1 << bit)) + data_off++; + bit++; + } + + /* Each bit represents 1 nibble */ + data_off *= 4; + + index = 0; + for (lid = 0; lid < NPC_MAX_LID; lid++) { + /* Offset in key */ + off = NPC_PARSE_KEX_S_LID_OFFSET(lid); + lt = pst->lt[lid] & 0xf; + flags = pst->flags[lid] & 0xff; + + /* NPC_LAYER_KEX_S */ + layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7); + + if (layer_info) { + for (idx = 0; idx <= 2 ; idx++) { + if (layer_info & (1 << idx)) { + if (idx == 2) + data = lt; + else if (idx == 1) + data = ((flags >> 4) & 0xf); + else + data = (flags & 0xf); + + if (data_off >= 64) { + data_off = 0; + index++; + } + key_data[index] |= ((uint64_t)data << + data_off); + mask = 0xf; + if (lt == 0) + mask = 0; + key_mask[index] |= ((uint64_t)mask << + data_off); + data_off += 4; + } + } + } + } + + otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64, + key_data[0], key_data[1]); + + /* Copy this into mcam string */ + key_len = (pst->npc->keyx_len[intf] + 7) / 8; + otx2_npc_dbg("Key_len = %d", key_len); + memcpy(pst->flow->mcam_data, key_data, key_len); + memcpy(pst->flow->mcam_mask, key_mask, key_len); + + otx2_npc_dbg("Final flow data"); + for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) { + otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64, + idx, pst->flow->mcam_data[idx], + idx, pst->flow->mcam_mask[idx]); + } + + /* + * Now we have mcam data and mask formatted as + * [Key_len/4 nibbles][0 or 1 nibble hole][data] + * hole is present if key_len is odd number of nibbles. + * mcam data must be split into 64 bits + 48 bits segments + * for each back W0, W1. 
+ */ + + return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info); +} + +static int +flow_parse_attr(struct rte_eth_dev *eth_dev, + const struct rte_flow_attr *attr, + struct rte_flow_error *error, + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + const char *errmsg = NULL; + + if (attr == NULL) + errmsg = "Attribute can't be empty"; + else if (attr->group) + errmsg = "Groups are not supported"; + else if (attr->priority >= dev->npc_flow.flow_max_priority) + errmsg = "Priority should be with in specified range"; + else if ((!attr->egress && !attr->ingress) || + (attr->egress && attr->ingress)) + errmsg = "Exactly one of ingress or egress must be set"; + + if (errmsg != NULL) { + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR, + attr, errmsg); + return -ENOTSUP; + } + + if (attr->ingress) + flow->nix_intf = OTX2_INTF_RX; + else + flow->nix_intf = OTX2_INTF_TX; + + flow->priority = attr->priority; + return 0; +} + +static inline int +flow_get_free_rss_grp(struct rte_bitmap *bmap, + uint32_t size, uint32_t *pos) +{ + for (*pos = 0; *pos < size; ++*pos) { + if (!rte_bitmap_get(bmap, *pos)) + break; + } + + return *pos < size ? 0 : -1; +} + +static int +flow_configure_rss_action(struct otx2_eth_dev *dev, + const struct rte_flow_action_rss *rss, + uint8_t *alg_idx, uint32_t *rss_grp, + int mcam_index) +{ + struct otx2_npc_flow_info *flow_info = &dev->npc_flow; + uint16_t reta[NIX_RSS_RETA_SIZE_MAX]; + uint32_t flowkey_cfg, grp_aval, i; + uint16_t *ind_tbl = NULL; + uint8_t flowkey_algx; + int rc; + + rc = flow_get_free_rss_grp(flow_info->rss_grp_entries, + flow_info->rss_grps, &grp_aval); + /* RSS group :0 is not usable for flow rss action */ + if (rc < 0 || grp_aval == 0) + return -ENOSPC; + + *rss_grp = grp_aval; + + otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key, + rss->key_len); + + /* If queue count passed in the rss action is less than + * HW configured reta size, replicate rss action reta + * across HW reta table. 
+ */ + if (dev->rss_info.rss_size > rss->queue_num) { + ind_tbl = reta; + + for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++) + memcpy(reta + i * rss->queue_num, rss->queue, + sizeof(uint16_t) * rss->queue_num); + + i = dev->rss_info.rss_size % rss->queue_num; + if (i) + memcpy(&reta[dev->rss_info.rss_size] - i, + rss->queue, i * sizeof(uint16_t)); + } else { + ind_tbl = (uint16_t *)(uintptr_t)rss->queue; + } + + rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl); + if (rc) { + otx2_err("Failed to init rss table rc = %d", rc); + return rc; + } + + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx, + *rss_grp, mcam_index); + if (rc) { + otx2_err("Failed to set rss hash function rc = %d", rc); + return rc; + } + + *alg_idx = flowkey_algx; + + rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp); + + return 0; +} + + +static int +flow_program_rss_action(struct rte_eth_dev *eth_dev, + const struct rte_flow_action actions[], + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + const struct rte_flow_action_rss *rss; + uint32_t rss_grp; + uint8_t alg_idx; + int rc; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) { + rss = (const struct rte_flow_action_rss *)actions->conf; + + rc = flow_configure_rss_action(dev, + rss, &alg_idx, &rss_grp, + flow->mcam_id); + if (rc) + return rc; + + flow->npc_action |= + ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) << + NIX_RSS_ACT_ALG_OFFSET) | + ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) << + NIX_RSS_ACT_GRP_OFFSET); + } + } + return 0; +} + +static int +flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst) +{ + otx2_npc_dbg("Meta Item"); + return 0; +} + +/* + * Parse function of each layer: + * - Consume one or more patterns that are relevant. + * - Update parse_state + * - Set parse_state.pattern = last item consumed + * - Set appropriate error code/message when returning error. 
+ */ +typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst); + +static int +flow_parse_pattern(struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + struct rte_flow_error *error, + struct rte_flow *flow, + struct otx2_parse_state *pst) +{ + flow_parse_stage_func_t parse_stage_funcs[] = { + flow_parse_meta_items, + otx2_flow_parse_la, + otx2_flow_parse_lb, + otx2_flow_parse_lc, + otx2_flow_parse_ld, + otx2_flow_parse_le, + otx2_flow_parse_lf, + otx2_flow_parse_lg, + otx2_flow_parse_lh, + }; + struct otx2_eth_dev *hw = dev->data->dev_private; + uint8_t layer = 0; + int key_offset; + int rc; + + if (pattern == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL, + "pattern is NULL"); + return -EINVAL; + } + + memset(pst, 0, sizeof(*pst)); + pst->npc = &hw->npc_flow; + pst->error = error; + pst->flow = flow; + + /* Use integral byte offset */ + key_offset = pst->npc->keyx_len[flow->nix_intf]; + key_offset = (key_offset + 7) / 8; + + /* Location where LDATA would begin */ + pst->mcam_data = (uint8_t *)flow->mcam_data; + pst->mcam_mask = (uint8_t *)flow->mcam_mask; + + while (pattern->type != RTE_FLOW_ITEM_TYPE_END && + layer < RTE_DIM(parse_stage_funcs)) { + otx2_npc_dbg("Pattern type = %d", pattern->type); + + /* Skip place-holders */ + pattern = otx2_flow_skip_void_and_any_items(pattern); + + pst->pattern = pattern; + otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer); + rc = parse_stage_funcs[layer](pst); + if (rc != 0) + return -rte_errno; + + layer++; + + /* + * Parse stage function sets pst->pattern to + * 1 past the last item it consumed. + */ + pattern = pst->pattern; + + if (pst->terminate) + break; + } + + /* Skip trailing place-holders */ + pattern = otx2_flow_skip_void_and_any_items(pattern); + + /* Are there more items than what we can handle? */ + if (pattern->type != RTE_FLOW_ITEM_TYPE_END) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, pattern, + "unsupported item in the sequence"); + return -ENOTSUP; + } + + return 0; +} + +static int +flow_parse_rule(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow, + struct otx2_parse_state *pst) +{ + int err; + + /* Check attributes */ + err = flow_parse_attr(dev, attr, error, flow); + if (err) + return err; + + /* Check actions */ + err = otx2_flow_parse_actions(dev, attr, actions, error, flow); + if (err) + return err; + + /* Check pattern */ + err = flow_parse_pattern(dev, pattern, error, flow, pst); + if (err) + return err; + + /* Check for overlaps? 
*/ + return 0; +} + +static int +otx2_flow_validate(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct otx2_parse_state parse_state; + struct rte_flow flow; + + memset(&flow, 0, sizeof(flow)); + return flow_parse_rule(dev, attr, pattern, actions, error, &flow, + &parse_state); +} + +static struct rte_flow * +otx2_flow_create(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_parse_state parse_state; + struct otx2_mbox *mbox = hw->mbox; + struct rte_flow *flow, *flow_iter; + struct otx2_flow_list *list; + int rc; + + flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0); + if (flow == NULL) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Memory allocation failed"); + return NULL; + } + memset(flow, 0, sizeof(*flow)); + + rc = flow_parse_rule(dev, attr, pattern, actions, error, flow, + &parse_state); + if (rc != 0) + goto err_exit; + + rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to insert filter"); + goto err_exit; + } + + rc = flow_program_rss_action(dev, actions, flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to program rss action"); + goto err_exit; + } + + + list = &hw->npc_flow.flow_list[flow->priority]; + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + return flow; + } + } + + TAILQ_INSERT_TAIL(list, flow, next); + return flow; + +err_exit: + rte_free(flow); + return NULL; +} + +const struct rte_flow_ops otx2_flow_ops = { + .validate = otx2_flow_validate, + .create = otx2_flow_create, +}; From patchwork Sun Jun 30 18:05:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55709 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B30AC1BB32; Sun, 30 Jun 2019 20:10:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 4C6671B9D1 for ; Sun, 30 Jun 2019 20:08:36 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6dmQ028335 for ; Sun, 30 Jun 2019 11:08:35 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=WFL34scTm1lXgrJ6lmypqTR81NhenUUIETPBJOpbuhA=; b=jlfl4dMk+9OuhHMnWi7W5gQIrYEyF+TKsBEJzcF6CNu0AK8MgNpcsfH03FJeApkl+rWP TNu0V8NcmvbxduhD3AQi5ukZHWN0HHltYLxv5+/RcuxyF6suZXwDwiyXwBaybfGnS14/ 9Mq4ZdQEu1Iw8OduWjJ3NOUi5uB3rWFZ8z/CdoarMkf8oU5WFnE2y2fVa9FF0lWPoaLl MkbaU7t2ZIjwRRZBG8rlGecCPvPzFhzlZblQ25b5ZJRWBLq7WqhuwITdYWtKYkO+uRSp 9qole7HreoaxzjccPbDsh7RfrhjAZ/hR2qj3RqKD0OI0UPJIxKNOY9FBZoAdr7F4YkRA +w== Received: from 
sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gms-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:35 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:34 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:34 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B4FC63F703F; Sun, 30 Jun 2019 11:08:32 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:52 +0530 Message-ID: <20190630180609.36705-41-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 40/57] net/octeontx2: add flow destroy ops support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding few more flow operations like flow_destroy, flow_isolate and flow_flush. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.c | 206 ++++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 3 + 2 files changed, 209 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 896aef00a..24bde623d 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -5,6 +5,48 @@ #include "otx2_ethdev.h" #include "otx2_flow.h" +int +otx2_flow_free_all_resources(struct otx2_eth_dev *hw) +{ + struct otx2_npc_flow_info *npc = &hw->npc_flow; + struct otx2_mbox *mbox = hw->mbox; + struct otx2_mcam_ents_info *info; + struct rte_bitmap *bmap; + struct rte_flow *flow; + int entry_count = 0; + int rc, idx; + + for (idx = 0; idx < npc->flow_max_priority; idx++) { + info = &npc->flow_entry_info[idx]; + entry_count += info->live_ent; + } + + if (entry_count == 0) + return 0; + + /* Free all MCAM entries allocated */ + rc = otx2_flow_mcam_free_all_entries(mbox); + + /* Free any MCAM counters and delete flow list */ + for (idx = 0; idx < npc->flow_max_priority; idx++) { + while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) { + if (flow->ctr_id != NPC_COUNTER_NONE) + rc |= otx2_flow_mcam_free_counter(mbox, + flow->ctr_id); + + TAILQ_REMOVE(&npc->flow_list[idx], flow, next); + rte_free(flow); + bmap = npc->live_entries[flow->priority]; + rte_bitmap_clear(bmap, flow->mcam_id); + } + info = &npc->flow_entry_info[idx]; + info->free_ent = 0; + info->live_ent = 0; + } + return rc; +} + + static int flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox, struct otx2_npc_flow_info *flow_info) @@ -237,6 +279,27 @@ flow_program_rss_action(struct rte_eth_dev *eth_dev, return 0; } +static int +flow_free_rss_action(struct rte_eth_dev *eth_dev, + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + struct otx2_npc_flow_info *npc = 
&dev->npc_flow; + uint32_t rss_grp; + + if (flow->npc_action & NIX_RX_ACTIONOP_RSS) { + rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) & + NIX_RSS_ACT_GRP_MASK; + if (rss_grp == 0 || rss_grp >= npc->rss_grps) + return -EINVAL; + + rte_bitmap_clear(npc->rss_grp_entries, rss_grp); + } + + return 0; +} + + static int flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst) { @@ -445,7 +508,150 @@ otx2_flow_create(struct rte_eth_dev *dev, return NULL; } +static int +otx2_flow_destroy(struct rte_eth_dev *dev, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + struct otx2_mbox *mbox = hw->mbox; + struct rte_bitmap *bmap; + uint16_t match_id; + int rc; + + match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) & + NIX_RX_ACT_MATCH_MASK; + + if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) { + if (rte_atomic32_read(&npc->mark_actions) == 0) + return -EINVAL; + + /* Clear mark offload flag if there are no more mark actions */ + if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) + hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F; + } + + rc = flow_free_rss_action(dev, flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to free rss action"); + } + + rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to destroy filter"); + } + + TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next); + + bmap = npc->live_entries[flow->priority]; + rte_bitmap_clear(bmap, flow->mcam_id); + + rte_free(flow); + return 0; +} + +static int +otx2_flow_flush(struct rte_eth_dev *dev, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + int rc; + + rc = otx2_flow_free_all_resources(hw); + if (rc) { + otx2_err("Error when deleting NPC MCAM entries " + ", counters"); + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to flush filter"); + return -rte_errno; + } + + return 0; +} + +static int +otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused, + int enable __rte_unused, + struct rte_flow_error *error) +{ + /* + * If we support, we need to un-install the default mcam + * entry for this port. 
+ */ + + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Flow isolation not supported"); + + return -rte_errno; +} + +static int +otx2_flow_query(struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_action *action, + void *data, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct rte_flow_query_count *query = data; + struct otx2_mbox *mbox = hw->mbox; + const char *errmsg = NULL; + int errcode = ENOTSUP; + int rc; + + if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) { + errmsg = "Only COUNT is supported in query"; + goto err_exit; + } + + if (flow->ctr_id == NPC_COUNTER_NONE) { + errmsg = "Counter is not available"; + goto err_exit; + } + + rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits); + if (rc != 0) { + errcode = EIO; + errmsg = "Error reading flow counter"; + goto err_exit; + } + query->hits_set = 1; + query->bytes_set = 0; + + if (query->reset) + rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id); + if (rc != 0) { + errcode = EIO; + errmsg = "Error clearing flow counter"; + goto err_exit; + } + + return 0; + +err_exit: + rte_flow_error_set(error, errcode, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + errmsg); + return -rte_errno; +} + const struct rte_flow_ops otx2_flow_ops = { .validate = otx2_flow_validate, .create = otx2_flow_create, + .destroy = otx2_flow_destroy, + .flush = otx2_flow_flush, + .query = otx2_flow_query, + .isolate = otx2_flow_isolate, }; diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index db79451b9..e18e04658 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -5,6 +5,9 @@ #ifndef __OTX2_RX_H__ #define __OTX2_RX_H__ +/* Default mark value used when none is provided. 
*/ +#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff + #define PTYPE_WIDTH 12 #define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) #define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) From patchwork Sun Jun 30 18:05:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55710 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6DA151BB71; Sun, 30 Jun 2019 20:10:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id EB85A1BACC for ; Sun, 30 Jun 2019 20:08:38 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6of3016863 for ; Sun, 30 Jun 2019 11:08:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=yzUiCTKHp5ob/Ke/Qv/kU/mA2qHPyURFTMwYTfq2vMk=; b=AHX0xCgaUU/ynFX+zgwtUZr/r4dPDFRMMLOOWoY8eYnS0bj31D8sKn9zDPwthXVUcZiJ S/UtcFZCvnjSL6h7Gz51+p8e1LbCcmjoFbfvYWOea/f3BpLcNu9atPUR+8Pl7D6NSEhf e0i3zPp1kSUz5upY6/3bUH41+aBtchnTHIUu+kfBKONIclK98VecJcYHOmyhMU+9JsZe 7pqIpGUNS8C+hmGAC5GFkar8RKjtrKS0Us2cr/uv/vV0CrX0ggfeE0Assn5Hgv6xo+bt QfGMtShNz40MKNeiV1peQtkYtcg05EhM0qvnfppx6Fu9Yvxk+PAy6d9SCY7OqScRUcuf kA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3ydp-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:38 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:37 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:37 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 78E753F703F; Sun, 30 Jun 2019 11:08:35 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:53 +0530 Message-ID: <20190630180609.36705-42-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 41/57] net/octeontx2: add flow init and fini X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding the flow init and fini functionality. These will be called from dev init and will initialize and de-initialize the flow related memory. 
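The helper below is a minimal sketch of the bitmap bookkeeping this patch sets up (n_entries stands in for npc->mcam_entries; the real init code carves four such bitmaps per priority level out of a single buffer):

#include <rte_bitmap.h>
#include <rte_malloc.h>

static struct rte_bitmap *
alloc_entry_bitmap(uint32_t n_entries)
{
	/* Footprint depends only on the number of tracked entries */
	uint32_t bmap_sz = rte_bitmap_get_memory_footprint(n_entries);
	uint8_t *mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);

	if (mem == NULL)
		return NULL;

	/* The bitmap starts with all bits clear */
	return rte_bitmap_init(n_entries, mem, bmap_sz);
}
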
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.c | 315 ++++++++++++++++++++++++++++++ 1 file changed, 315 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 24bde623d..94bd85161 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -655,3 +655,318 @@ const struct rte_flow_ops otx2_flow_ops = { .query = otx2_flow_query, .isolate = otx2_flow_isolate, }; + +static int +flow_supp_key_len(uint32_t supp_mask) +{ + int nib_count = 0; + while (supp_mask) { + nib_count++; + supp_mask &= (supp_mask - 1); + } + return nib_count * 4; +} + +/* Refer HRM register: + * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG + * and + * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG + **/ +#define BYTESM1_SHIFT 16 +#define HDR_OFF_SHIFT 8 +static void +flow_update_kex_info(struct npc_xtract_info *xtract_info, + uint64_t val) +{ + xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1; + xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff; + xtract_info->key_off = val & 0x3f; + xtract_info->enable = ((val >> 7) & 0x1); +} + +static void +flow_process_mkex_cfg(struct otx2_npc_flow_info *npc, + struct npc_get_kex_cfg_rsp *kex_rsp) +{ + volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT] + [NPC_MAX_LD]; + struct npc_xtract_info *x_info = NULL; + int lid, lt, ld, fl, ix; + otx2_dxcfg_t *p; + uint64_t keyw; + uint64_t val; + + npc->keyx_supp_nmask[NPC_MCAM_RX] = + kex_rsp->rx_keyx_cfg & 0x7fffffffULL; + npc->keyx_supp_nmask[NPC_MCAM_TX] = + kex_rsp->tx_keyx_cfg & 0x7fffffffULL; + npc->keyx_len[NPC_MCAM_RX] = + flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); + npc->keyx_len[NPC_MCAM_TX] = + flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); + + keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_RX] = keyw; + keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_TX] = keyw; + + /* Update KEX_LD_FLAG */ + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + for (fl = 0; fl < NPC_MAX_LFL; fl++) { + x_info = + &npc->prx_fxcfg[ix][ld][fl].xtract[0]; + val = kex_rsp->intf_ld_flags[ix][ld][fl]; + flow_update_kex_info(x_info, val); + } + } + } + + /* Update LID, LT and LDATA cfg */ + p = &npc->prx_dxcfg; + q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]) + (&kex_rsp->intf_lid_lt_ld); + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (lid = 0; lid < NPC_MAX_LID; lid++) { + for (lt = 0; lt < NPC_MAX_LT; lt++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + x_info = &(*p)[ix][lid][lt].xtract[ld]; + val = (*q)[ix][lid][lt][ld]; + flow_update_kex_info(x_info, val); + } + } + } + } + /* Update LDATA Flags cfg */ + npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0]; + npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1]; +} + +static struct otx2_idev_kex_cfg * +flow_intra_dev_kex_cfg(void) +{ + static const char name[] = "octeontx2_intra_device_kex_conf"; + struct otx2_idev_kex_cfg *idev; + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(name); + if (mz) + return mz->addr; + + /* Request for the first time */ + mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg), + SOCKET_ID_ANY, 0, OTX2_ALIGN); + if (mz) { + idev = mz->addr; + rte_atomic16_set(&idev->kex_refcnt, 0); + return idev; + } + return NULL; +} + +static int +flow_fetch_kex_cfg(struct otx2_eth_dev *dev) +{ + struct otx2_npc_flow_info *npc = &dev->npc_flow; + struct npc_get_kex_cfg_rsp *kex_rsp; + struct otx2_mbox *mbox = dev->mbox; + 
struct otx2_idev_kex_cfg *idev; + int rc = 0; + + idev = flow_intra_dev_kex_cfg(); + if (!idev) + return -ENOMEM; + + /* Is kex_cfg read by any another driver? */ + if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) { + /* Call mailbox to get key & data size */ + (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox); + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp); + if (rc) { + otx2_err("Failed to fetch NPC keyx config"); + goto done; + } + memcpy(&idev->kex_cfg, kex_rsp, + sizeof(struct npc_get_kex_cfg_rsp)); + } + + flow_process_mkex_cfg(npc, &idev->kex_cfg); + +done: + return rc; +} + +int +otx2_flow_init(struct otx2_eth_dev *hw) +{ + uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + uint32_t bmap_sz; + int rc = 0, idx; + + rc = flow_fetch_kex_cfg(hw); + if (rc) { + otx2_err("Failed to fetch NPC keyx config from idev"); + return rc; + } + + rte_atomic32_init(&npc->mark_actions); + + npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> npc->keyw[NPC_MCAM_RX]; + /* Free, free_rev, live and live_rev entries */ + bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries); + mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority, + RTE_CACHE_LINE_SIZE); + if (mem == NULL) { + otx2_err("Bmap alloc failed"); + rc = -ENOMEM; + return rc; + } + + npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct otx2_mcam_ents_info), + 0); + if (npc->flow_entry_info == NULL) { + otx2_err("flow_entry_info alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->free_entries == NULL) { + otx2_err("free_entries alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->free_entries_rev == NULL) { + otx2_err("free_entries_rev alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->live_entries == NULL) { + otx2_err("live_entries alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->live_entries_rev == NULL) { + otx2_err("live_entries_rev alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct otx2_flow_list), + 0); + if (npc->flow_list == NULL) { + otx2_err("flow_list alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc_mem = mem; + for (idx = 0; idx < npc->flow_max_priority; idx++) { + TAILQ_INIT(&npc->flow_list[idx]); + + npc->free_entries[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->free_entries_rev[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries_rev[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->flow_entry_info[idx].free_ent = 0; + npc->flow_entry_info[idx].live_ent = 0; + npc->flow_entry_info[idx].max_id = 0; + npc->flow_entry_info[idx].min_id = ~(0); + } + + npc->rss_grps = NIX_RSS_GRPS; + + bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps); + nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE); + if (nix_mem == NULL) { + otx2_err("Bmap alloc failed"); + rc = 
-ENOMEM; + goto err; + } + + npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz); + + /* Group 0 will be used for RSS, + * 1 -7 will be used for rte_flow RSS action + */ + rte_bitmap_set(npc->rss_grp_entries, 0); + + return 0; + +err: + if (npc->flow_list) + rte_free(npc->flow_list); + if (npc->live_entries_rev) + rte_free(npc->live_entries_rev); + if (npc->live_entries) + rte_free(npc->live_entries); + if (npc->free_entries_rev) + rte_free(npc->free_entries_rev); + if (npc->free_entries) + rte_free(npc->free_entries); + if (npc->flow_entry_info) + rte_free(npc->flow_entry_info); + if (npc_mem) + rte_free(npc_mem); + if (nix_mem) + rte_free(nix_mem); + return rc; +} + +int +otx2_flow_fini(struct otx2_eth_dev *hw) +{ + struct otx2_npc_flow_info *npc = &hw->npc_flow; + int rc; + + rc = otx2_flow_free_all_resources(hw); + if (rc) { + otx2_err("Error when deleting NPC MCAM entries, counters"); + return rc; + } + + if (npc->flow_list) + rte_free(npc->flow_list); + if (npc->live_entries_rev) + rte_free(npc->live_entries_rev); + if (npc->live_entries) + rte_free(npc->live_entries); + if (npc->free_entries_rev) + rte_free(npc->free_entries_rev); + if (npc->free_entries) + rte_free(npc->free_entries); + if (npc->flow_entry_info) + rte_free(npc->flow_entry_info); + + return 0; +} From patchwork Sun Jun 30 18:05:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55711 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 534531BB7F; Sun, 30 Jun 2019 20:10:29 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 359C81BAC5 for ; Sun, 30 Jun 2019 20:08:42 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5FMf027215; Sun, 30 Jun 2019 11:08:41 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=ZYZiXgs/x7ZfYuVI+IGw071WNcZq946F/D41V2xttx0=; b=aZgt0Ad4+wrO1UbqXcS3ZJq7C79tCs7kcZn4DFBnwgINYy0Yg8U0UV9rSW2AoFH2qxJs NEofKIVlLexQS5XqnLC6gh3DW6a5xeNGAdVyPY0DLPa8XMxHLhXPrtkKwVKywebPjCDi qQG81L7m2qYUcYNY4htxhHLDN94lBidB+0kA0Y16S0DQFw1uRPJOYSNT0cXzZaa07krP wOLDEhNxRw5M81UPrXe4sxKlSY8Har0i8KwBcL9Cl3D+0yqXhaqAjY/Ort6kEu+DYy7V z4uQ+QOc2011YoDhJDsSytrB9ko0Dkovf26xABuExzHO8wjdv4ZvYSFFl0dI0kT/CUCa aQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gn2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:08:41 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:40 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:40 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2FE903F703F; Sun, 30 Jun 2019 11:08:37 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , 
"Nithin Dabilpuram" , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:54 +0530 Message-ID: <20190630180609.36705-43-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 42/57] net/octeontx2: connect flow API to ethdev ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Connect rte_flow driver ops to ethdev via .filter_ctrl op. Signed-off-by: Vivek Sharma Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/octeontx2.rst | 93 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.c | 9 +++ drivers/net/octeontx2/otx2_ethdev.h | 3 + drivers/net/octeontx2/otx2_ethdev_ops.c | 21 +++++ 7 files changed, 129 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 46fb00be6..33d2f2785 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -22,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Flow control = Y +Flow API = Y Packet type parsing = Y Timesync = Y Timestamp offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index f3f812804..980a4daf9 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -22,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Flow control = Y +Flow API = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 7fba7e1d9..330534a90 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -17,6 +17,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow API = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 41eb3c7b9..0f1756932 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -23,6 +23,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Multiple queues for TX and RX - Receiver Side Scaling (RSS) - MAC filtering +- Generic flow API - Port hardware statistics - Link state information - Link flow control @@ -109,3 +110,95 @@ Runtime Config Options Above devarg parameters are configurable per device, user needs to pass the parameters to all the PCIe devices if application requires to configure on all the ethdev ports. + +RTE Flow Support +---------------- + +The OCTEON TX2 SoC family NIC has support for the following patterns and +actions. + +Patterns: + +.. _table_octeontx2_supported_flow_item_types: + +.. 
table:: Item types + + +----+--------------------------------+ + | # | Pattern Type | + +====+================================+ + | 1 | RTE_FLOW_ITEM_TYPE_ETH | + +----+--------------------------------+ + | 2 | RTE_FLOW_ITEM_TYPE_VLAN | + +----+--------------------------------+ + | 3 | RTE_FLOW_ITEM_TYPE_E_TAG | + +----+--------------------------------+ + | 4 | RTE_FLOW_ITEM_TYPE_IPV4 | + +----+--------------------------------+ + | 5 | RTE_FLOW_ITEM_TYPE_IPV6 | + +----+--------------------------------+ + | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4| + +----+--------------------------------+ + | 7 | RTE_FLOW_ITEM_TYPE_MPLS | + +----+--------------------------------+ + | 8 | RTE_FLOW_ITEM_TYPE_ICMP | + +----+--------------------------------+ + | 9 | RTE_FLOW_ITEM_TYPE_UDP | + +----+--------------------------------+ + | 10 | RTE_FLOW_ITEM_TYPE_TCP | + +----+--------------------------------+ + | 11 | RTE_FLOW_ITEM_TYPE_SCTP | + +----+--------------------------------+ + | 12 | RTE_FLOW_ITEM_TYPE_ESP | + +----+--------------------------------+ + | 13 | RTE_FLOW_ITEM_TYPE_GRE | + +----+--------------------------------+ + | 14 | RTE_FLOW_ITEM_TYPE_NVGRE | + +----+--------------------------------+ + | 15 | RTE_FLOW_ITEM_TYPE_VXLAN | + +----+--------------------------------+ + | 16 | RTE_FLOW_ITEM_TYPE_GTPC | + +----+--------------------------------+ + | 17 | RTE_FLOW_ITEM_TYPE_GTPU | + +----+--------------------------------+ + | 18 | RTE_FLOW_ITEM_TYPE_VOID | + +----+--------------------------------+ + | 19 | RTE_FLOW_ITEM_TYPE_ANY | + +----+--------------------------------+ + +Actions: + +.. _table_octeontx2_supported_ingress_action_types: + +.. table:: Ingress action types + + +----+--------------------------------+ + | # | Action Type | + +====+================================+ + | 1 | RTE_FLOW_ACTION_TYPE_VOID | + +----+--------------------------------+ + | 2 | RTE_FLOW_ACTION_TYPE_MARK | + +----+--------------------------------+ + | 3 | RTE_FLOW_ACTION_TYPE_FLAG | + +----+--------------------------------+ + | 4 | RTE_FLOW_ACTION_TYPE_COUNT | + +----+--------------------------------+ + | 5 | RTE_FLOW_ACTION_TYPE_DROP | + +----+--------------------------------+ + | 6 | RTE_FLOW_ACTION_TYPE_QUEUE | + +----+--------------------------------+ + | 7 | RTE_FLOW_ACTION_TYPE_RSS | + +----+--------------------------------+ + | 8 | RTE_FLOW_ACTION_TYPE_SECURITY | + +----+--------------------------------+ + +.. _table_octeontx2_supported_egress_action_types: + +.. 
table:: Egress action types + + +----+--------------------------------+ + | # | Action Type | + +====+================================+ + | 1 | RTE_FLOW_ACTION_TYPE_COUNT | + +----+--------------------------------+ + | 2 | RTE_FLOW_ACTION_TYPE_DROP | + +----+--------------------------------+ diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 7512aacb3..09201fd23 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1326,6 +1326,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_descriptor_status = otx2_nix_rx_descriptor_status, .tx_done_cleanup = otx2_nix_tx_done_cleanup, .pool_ops_supported = otx2_nix_pool_ops_supported, + .filter_ctrl = otx2_nix_dev_filter_ctrl, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, .flow_ctrl_get = otx2_nix_flow_ctrl_get, @@ -1505,6 +1506,11 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL; } + /* Initialize rte-flow */ + rc = otx2_flow_init(dev); + if (rc) + goto free_mac_addrs; + otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64 " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64, eth_dev->data->port_id, dev->pf, dev->vf, @@ -1541,6 +1547,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable other rte_flow entries */ + otx2_flow_fini(dev); + /* Disable PTP if already enabled */ if (otx2_ethdev_is_ptp_en(dev)) otx2_nix_timesync_disable(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index e8a22b6ec..ad12f2553 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -294,6 +294,9 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, void *arg); int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, struct rte_eth_dev_module_info *modinfo); int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 2a949439a..e55acd4e0 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -220,6 +220,27 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) return -ENOTSUP; } +int +otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, void *arg) +{ + RTE_SET_USED(eth_dev); + + if (filter_type != RTE_ETH_FILTER_GENERIC) { + otx2_err("Unsupported filter type %d", filter_type); + return -ENOTSUP; + } + + if (filter_op == RTE_ETH_FILTER_GET) { + *(const void **)arg = &otx2_flow_ops; + return 0; + } + + otx2_err("Invalid filter_op %d", filter_op); + return -EINVAL; +} + static struct cgx_fw_data * nix_get_fwdata(struct otx2_eth_dev *dev) { From patchwork Sun Jun 30 18:05:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55712 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with 
ESMTP id 9533D1BB9A; Sun, 30 Jun 2019 20:10:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 218981BAE6 for ; Sun, 30 Jun 2019 20:08:44 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OO028454 for ; Sun, 30 Jun 2019 11:08:44 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=f0cmNA01kKoRIJOtL3LiHHHHDNRIB3hfBPqMXAoESaQ=; b=SCR66OSEQozmYYgaBC9k8yu3nGkyqu5jwuT0vmnGwUSg4U70sPjbQYpfASJdLdAG0rQW YVCnjLOazHZTNhYM07b7n6WfrmGu1u8l9pu37PJz/iWLjjdiF9tsZLKOQexK8bBWsp/H sbEGBi7PqYdh3jx2UArRn3EnVN6amRtnSwVPVqjnumaa9VY6pk7MWXgcXA7YMjO/VXks FT5sOGSsF65ovuAgawhmNzLn+Cgp/rrjHsv1Mo+KcBJZNUES/iC1BOkbc19ycNPYvswx TG7HPL0bSobkI/c2ecevOvK32oI8P2O7nCRyj9/dbLWW+p9deuw/5kDvg55FOZdZhrdC iw== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gn5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:44 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:43 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:43 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 748BA3F703F; Sun, 30 Jun 2019 11:08:41 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:55 +0530 Message-ID: <20190630180609.36705-44-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 43/57] net/octeontx2: implement VLAN utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Implement accessory functions needed for VLAN functionality. Introduce VLAN related structures as well. 
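As a forward-looking sketch only (these ethdev-level knobs are connected to the new structures by later patches in this series, and the mask semantics are the generic rte_ethdev ones), an application would eventually drive this plumbing as follows:

#include <rte_ethdev.h>

static int
enable_vlan_offloads(uint16_t port_id, uint16_t vlan_id)
{
	int rc;

	/* Turn on VLAN stripping and filtering for the port */
	rc = rte_eth_dev_set_vlan_offload(port_id,
					  ETH_VLAN_STRIP_MASK |
					  ETH_VLAN_FILTER_MASK);
	if (rc != 0)
		return rc;

	/* Admit this VLAN ID through the filter table */
	return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
}
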
Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 10 ++ drivers/net/octeontx2/otx2_ethdev.h | 46 +++++++ drivers/net/octeontx2/otx2_vlan.c | 190 ++++++++++++++++++++++++++++ 5 files changed, 248 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_vlan.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 21559f631..d22ddae33 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ptp.c \ otx2_flow.c \ otx2_link.c \ + otx2_vlan.c \ otx2_stats.c \ otx2_lookup.c \ otx2_ethdev.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index f0e03bffe..6281ee21b 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -9,6 +9,7 @@ sources = files( 'otx2_ptp.c', 'otx2_flow.c', 'otx2_link.c', + 'otx2_vlan.c', 'otx2_stats.c', 'otx2_lookup.c', 'otx2_ethdev.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 09201fd23..48c2e8f57 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1083,6 +1083,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { otx2_nix_rxchan_bpid_cfg(eth_dev, false); + otx2_nix_vlan_fini(eth_dev); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); @@ -1129,6 +1130,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + rc = otx2_nix_vlan_offload_init(eth_dev); + if (rc) { + otx2_err("Failed to init vlan offload rc=%d", rc); + goto free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -1547,6 +1554,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable vlan offloads */ + otx2_nix_vlan_fini(eth_dev); + /* Disable other rte_flow entries */ otx2_flow_fini(dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index ad12f2553..8577272b4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -182,6 +182,47 @@ struct otx2_fc_info { uint16_t bpid[NIX_MAX_CHAN]; }; +struct vlan_mkex_info { + struct npc_xtract_info la_xtract; + struct npc_xtract_info lb_xtract; + uint64_t lb_lt_offset; +}; + +struct vlan_entry { + uint32_t mcam_idx; + uint16_t vlan_id; + TAILQ_ENTRY(vlan_entry) next; +}; + +TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry); + +struct otx2_vlan_info { + struct otx2_vlan_filter_tbl fltr_tbl; + /* MKEX layer info */ + struct mcam_entry def_tx_mcam_ent; + struct mcam_entry def_rx_mcam_ent; + struct vlan_mkex_info mkex; + /* Default mcam entry that matches vlan packets */ + uint32_t def_rx_mcam_idx; + uint32_t def_tx_mcam_idx; + /* MCAM entry that matches double vlan packets */ + uint32_t qinq_mcam_idx; + /* Indices of tx_vtag def registers */ + uint32_t outer_vlan_idx; + uint32_t inner_vlan_idx; + uint16_t outer_vlan_tpid; + uint16_t inner_vlan_tpid; + uint16_t pvid; + /* QinQ entry allocated before default one */ + uint8_t qinq_before_def; + uint8_t pvid_insert_on; + /* Rx vtag action type */ + uint8_t vtag_type_idx; + uint8_t filter_on; + uint8_t strip_on; + uint8_t qinq_on; +}; + struct otx2_eth_dev { 
OTX2_DEV; /* Base class */ MARKER otx2_eth_dev_data_start; @@ -233,6 +274,7 @@ struct otx2_eth_dev { uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; + struct otx2_vlan_info vlan_info; struct otx2_eth_qconf *tx_qconf; struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; @@ -402,6 +444,10 @@ int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb); int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev); +/* VLAN */ +int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); +int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); + /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c new file mode 100644 index 000000000..b3136d2cf --- /dev/null +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + + +#define VLAN_ID_MATCH 0x1 +#define VTAG_F_MATCH 0x2 +#define MAC_ADDR_MATCH 0x4 +#define QINQ_F_MATCH 0x8 +#define VLAN_DROP 0x10 + +enum vtag_cfg_dir { + VTAG_TX, + VTAG_RX +}; + +static int +__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, + uint32_t entry, const int enable) +{ + struct npc_mcam_ena_dis_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + if (enable) + req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox); + else + req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox); + + req->entry = entry; + + rc = otx2_mbox_process_msg(mbox, NULL); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) +{ + struct npc_mcam_free_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->entry = entry; + + rc = otx2_mbox_process_msg(mbox, NULL); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, + struct mcam_entry *entry, uint8_t intf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct npc_mcam_write_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + struct msghdr *rsp; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox); + + req->entry = ent_idx; + req->intf = intf; + req->enable_entry = 1; + memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, + uint8_t intf, bool drop) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct npc_mcam_alloc_and_write_entry_req *req; + struct npc_mcam_alloc_and_write_entry_rsp *rsp; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox); + + if (intf == NPC_MCAM_RX) { + if (!drop && dev->vlan_info.def_rx_mcam_idx) { + req->priority = NPC_MCAM_HIGHER_PRIO; + req->ref_entry = dev->vlan_info.def_rx_mcam_idx; + } else if (drop && dev->vlan_info.qinq_mcam_idx) { + req->priority = NPC_MCAM_LOWER_PRIO; + req->ref_entry = dev->vlan_info.qinq_mcam_idx; + } else { + req->priority = NPC_MCAM_ANY_PRIO; + req->ref_entry = 0; + } + } else { + req->priority = NPC_MCAM_ANY_PRIO; + req->ref_entry = 0; + } + + req->intf = intf; + req->enable_entry = 1; + 
memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + return rsp->entry; +} + +static int +nix_vlan_rx_mkex_offset(uint64_t mask) +{ + int nib_count = 0; + + while (mask) { + nib_count += mask & 1; + mask >>= 1; + } + + return nib_count * 4; +} + +static int +nix_vlan_get_mkex_info(struct otx2_eth_dev *dev) +{ + struct vlan_mkex_info *mkex = &dev->vlan_info.mkex; + struct otx2_npc_flow_info *npc = &dev->npc_flow; + struct npc_xtract_info *x_info = NULL; + uint64_t rx_keyx; + otx2_dxcfg_t *p; + int rc = -EINVAL; + + if (npc == NULL) { + otx2_err("Missing npc mkex configuration"); + return rc; + } + +#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL +#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL +#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL + + rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX]; + if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA) + return rc; + + if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) != + NPC_KEX_LB_LTYPE_NIBBLE_ENA) + return rc; + + mkex->lb_lt_offset = + nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK); + + p = &npc->prx_dxcfg; + x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0]; + memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info)); + x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0]; + memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info)); + + return 0; +} + +int +otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + /* Port initialized for first time or restarted */ + if (!dev->configured) { + rc = nix_vlan_get_mkex_info(dev); + if (rc) { + otx2_err("Failed to get vlan mkex info rc=%d", rc); + return rc; + } + } + return 0; +} + +int +otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev) +{ + return 0; +} From patchwork Sun Jun 30 18:05:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55713 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C81371BBD4; Sun, 30 Jun 2019 20:10:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id C12171BABD for ; Sun, 30 Jun 2019 20:08:48 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5FMg027215; Sun, 30 Jun 2019 11:08:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=7zhVbL+BrmU9w49eb3n4Ak+UBhKFz/xL5IVwDo2ruY0=; b=CuPeKW11AxxHCbGeoMNFuboYQ/r6GE2+HrLzQBIlb7YalunyXcbI8oD5GM6C3ZfMTYjS X4qseYenX/AEindODlQKQsuM4R+RIy3KGRlkf4QOelnKtBmuMerk+sJpK/oqoy5mQFct +quVOIdv39er2j5/M6sDjRKOr5K+wy+sDTaDaT6SAfQe81W+LJ4jBRUnG1U5PAF05pzy r/vTQYoaN9DN1EcI+AYrLhLzVWHWvAEQD2itOzXoUiyQLstONJq7Vf1dCohdr9xx3sQp Bpw33fSxucC9ASEfoRwRdL1BDs4r5eGzXJVP1u4ZE0X5SBvl+ZXNLejkApKPF+V/sv5D 6Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gnb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:08:47 -0700 
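As an aside on the mkex logic above: nix_vlan_get_mkex_info() locates the LB ltype nibble inside the MCAM search key by popcounting the enabled-nibble bits below it, with every enabled nibble contributing four key bits. A standalone rendition of that computation follows; the mask value is an assumption chosen purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* Mirrors nix_vlan_rx_mkex_offset(): every enabled nibble in the KEX
 * mask contributes four bits to the MCAM search key. */
static int rx_mkex_offset(uint64_t mask)
{
	int nib_count = 0;

	while (mask) {
		nib_count += mask & 1;
		mask >>= 1;
	}
	return nib_count * 4;
}

int main(void)
{
	/* Assumed value: only the three channel nibbles (bits 0-2) are
	 * enabled below the LB ltype nibble, i.e. the result of
	 * rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK in the patch. */
	uint64_t below_lb_ltype = 0x7ULL;

	/* Prints 12: three nibbles * 4 bits each */
	printf("lb_lt_offset = %d\n", rx_mkex_offset(below_lb_ltype));
	return 0;
}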
Date: Sun, 30 Jun 2019 23:35:56 +0530
To: John McNamara, Marko Kovacevic, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
CC: Vivek Sharma
Message-ID: <20190630180609.36705-45-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 44/57] net/octeontx2: support VLAN offloads

From: Vivek Sharma

Support configuring VLAN offloads for an Ethernet device, and update the
VLAN filters dynamically whenever the promiscuous mode of the device
changes.

Signed-off-by: Vivek Sharma
---
 doc/guides/nics/features/octeontx2.ini     | 2 +
 doc/guides/nics/features/octeontx2_vec.ini | 2 +
 doc/guides/nics/features/octeontx2_vf.ini  | 2 +
 doc/guides/nics/octeontx2.rst              | 1 +
 drivers/net/octeontx2/otx2_ethdev.c        | 1 +
 drivers/net/octeontx2/otx2_ethdev.h        | 3 +
 drivers/net/octeontx2/otx2_ethdev_ops.c    | 1 +
 drivers/net/octeontx2/otx2_rx.h            | 1 +
 drivers/net/octeontx2/otx2_vlan.c          | 523 ++++++++++++++++++++-
 9 files changed, 527 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 33d2f2785..ac4712b0c 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
 Inner RSS = Y
 Flow control = Y
 Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
 Packet type parsing = Y
 Timesync = Y
 Timestamp offload = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 980a4daf9..e54c1babe 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -23,6 +23,8 @@ RSS reta update = Y
 Inner RSS = Y
 Flow control = Y
 Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
 Packet type parsing = Y
 Rx descriptor status = Y
 Basic stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 330534a90..769ab16ee 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -18,6 +18,8 @@ RSS key update = Y
 RSS reta update = Y
 Inner RSS = Y
 Flow API = Y
+VLAN offload = Y
+QinQ offload = Y
 Packet type parsing = Y
 Rx descriptor status = Y
 Basic stats = Y
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
index 0f1756932..a53b71a6d 100644
--- a/doc/guides/nics/octeontx2.rst
+++ b/doc/guides/nics/octeontx2.rst
@@ -24,6 +24,7 @@ Features of the OCTEON TX2 Ethdev PMD are:
 - Receiver Side Scaling (RSS)
 - MAC
filtering - Generic flow API +- VLAN/QinQ stripping and insertion - Port hardware statistics - Link state information - Link flow control diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 48c2e8f57..55a5cdc48 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1345,6 +1345,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .timesync_adjust_time = otx2_nix_timesync_adjust_time, .timesync_read_time = otx2_nix_timesync_read_time, .timesync_write_time = otx2_nix_timesync_write_time, + .vlan_offload_set = otx2_nix_vlan_offload_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8577272b4..50fd18b6e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -221,6 +221,7 @@ struct otx2_vlan_info { uint8_t filter_on; uint8_t strip_on; uint8_t qinq_on; + uint8_t promisc_on; }; struct otx2_eth_dev { @@ -447,6 +448,8 @@ int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev); /* VLAN */ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); +int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask); +void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable); /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index e55acd4e0..690d8ac0c 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -40,6 +40,7 @@ otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en) otx2_mbox_process(mbox); eth_dev->data->promiscuous = en; + otx2_nix_vlan_update_promisc(eth_dev, en); } void diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index e18e04658..7dc34d705 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -16,6 +16,7 @@ sizeof(uint16_t)) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3) #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index b3136d2cf..7cf4f3136 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -14,6 +14,7 @@ #define MAC_ADDR_MATCH 0x4 #define QINQ_F_MATCH 0x8 #define VLAN_DROP 0x10 +#define DEF_F_ENTRY 0x20 enum vtag_cfg_dir { VTAG_TX, @@ -39,8 +40,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, return rc; } +static void +nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, bool qinq, bool drop) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int pcifunc = otx2_pfvf_func(dev->pf, dev->vf); + uint64_t action = 0, vtag_action = 0; + + action = NIX_RX_ACTIONOP_UCAST; + + if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) { + action = NIX_RX_ACTIONOP_RSS; + action |= (uint64_t)(dev->rss_info.alg_idx) << 56; + } + + action |= (uint64_t)pcifunc << 4; + entry->action = action; + + if (drop) { + entry->action &= ~((uint64_t)0xF); + entry->action |= NIX_RX_ACTIONOP_DROP; + return; + } + + if (!qinq) { + /* VTAG0 fields denote CTAG in single vlan case */ + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); + vtag_action |= (NPC_LID_LB << 8); + vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR; + } else { + /* VTAG0 & VTAG1 fields denote CTAG 
& STAG respectively */ + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); + vtag_action |= (NPC_LID_LB << 8); + vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR; + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47); + vtag_action |= ((uint64_t)(NPC_LID_LB) << 40); + vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32); + } + + entry->vtag_action = vtag_action; +} + static int -__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) +nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) { struct npc_mcam_free_entry_req *req; struct otx2_mbox *mbox = dev->mbox; @@ -54,8 +97,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) } static int -__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, - struct mcam_entry *entry, uint8_t intf) +nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, + struct mcam_entry *entry, uint8_t intf, uint8_t ena) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct npc_mcam_write_entry_req *req; @@ -67,7 +110,7 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, req->entry = ent_idx; req->intf = intf; - req->enable_entry = 1; + req->enable_entry = ena; memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); rc = otx2_mbox_process_msg(mbox, (void *)&rsp); @@ -75,9 +118,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, } static int -__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, - struct mcam_entry *entry, - uint8_t intf, bool drop) +nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, + uint8_t intf, bool drop) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct npc_mcam_alloc_and_write_entry_req *req; @@ -114,6 +157,443 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, return rsp->entry; } +static void +nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index, + int enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct vlan_mkex_info *mkex = &dev->vlan_info.mkex; + volatile uint8_t *key_data, *key_mask; + struct npc_mcam_read_entry_req *req; + struct npc_mcam_read_entry_rsp *rsp; + struct otx2_mbox *mbox = dev->mbox; + uint64_t mcam_data, mcam_mask; + struct mcam_entry entry; + uint8_t intf, mcam_ena; + int idx, rc = -EINVAL; + uint8_t *mac_addr; + + memset(&entry, 0, sizeof(struct mcam_entry)); + + /* Read entry first */ + req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox); + + req->entry = mcam_index; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to read entry %d", mcam_index); + return; + } + + entry = rsp->entry_data; + intf = rsp->intf; + mcam_ena = rsp->enable; + + /* Update mcam address */ + key_data = (volatile uint8_t *)entry.kw; + key_mask = (volatile uint8_t *)entry.kw_mask; + + if (enable) { + mcam_mask = 0; + otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off, + &mcam_mask, mkex->la_xtract.len + 1); + + } else { + mcam_data = 0ULL; + mac_addr = dev->mac_addr; + for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--) + mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx); + + mcam_mask = BIT_ULL(48) - 1; + + otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off, + &mcam_data, mkex->la_xtract.len + 1); + otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off, + &mcam_mask, mkex->la_xtract.len + 1); + } + + /* Write back the mcam entry */ + rc = nix_vlan_mcam_write(eth_dev, mcam_index, + &entry, intf, mcam_ena); + if (rc) { + 
otx2_err("Failed to write entry %d", mcam_index); + return; + } +} + +void +otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; + + /* Already in required mode */ + if (enable == vlan->promisc_on) + return; + + /* Update default rx entry */ + if (vlan->def_rx_mcam_idx) + nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable); + + /* Update all other rx filter entries */ + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) + nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable); + + vlan->promisc_on = enable; +} + +/* Configure mcam entry with required MCAM search rules */ +static int +nix_vlan_mcam_config(struct rte_eth_dev *eth_dev, + uint16_t vlan_id, uint16_t flags) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct vlan_mkex_info *mkex = &dev->vlan_info.mkex; + volatile uint8_t *key_data, *key_mask; + uint64_t mcam_data, mcam_mask; + struct mcam_entry entry; + uint8_t *mac_addr; + int idx, kwi = 0; + + memset(&entry, 0, sizeof(struct mcam_entry)); + key_data = (volatile uint8_t *)entry.kw; + key_mask = (volatile uint8_t *)entry.kw_mask; + + /* Channel base extracted to KW0[11:0] */ + entry.kw[kwi] = dev->rx_chan_base; + entry.kw_mask[kwi] = BIT_ULL(12) - 1; + + /* Adds vlan_id & LB CTAG flag to MCAM KW */ + if (flags & VLAN_ID_MATCH) { + entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset; + + mcam_data = (vlan_id << 16); + mcam_mask = (BIT_ULL(16) - 1) << 16; + otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off, + &mcam_data, mkex->lb_xtract.len + 1); + otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off, + &mcam_mask, mkex->lb_xtract.len + 1); + } + + /* Adds LB STAG flag to MCAM KW */ + if (flags & QINQ_F_MATCH) { + entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset; + } + + /* Adds LB CTAG & LB STAG flags to MCAM KW */ + if (flags & VTAG_F_MATCH) { + entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG) + << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG) + << mkex->lb_lt_offset; + } + + /* Adds port MAC address to MCAM KW */ + if (flags & MAC_ADDR_MATCH) { + mcam_data = 0ULL; + mac_addr = dev->mac_addr; + for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--) + mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx); + + mcam_mask = BIT_ULL(48) - 1; + otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off, + &mcam_data, mkex->la_xtract.len + 1); + otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off, + &mcam_mask, mkex->la_xtract.len + 1); + } + + /* VLAN_DROP: for drop action for all vlan packets when filter is on. 
+ * For QinQ, enable vtag action for both outer & inner tags + */ + if (flags & VLAN_DROP) + nix_set_rx_vlan_action(eth_dev, &entry, false, true); + else if (flags & QINQ_F_MATCH) + nix_set_rx_vlan_action(eth_dev, &entry, true, false); + else + nix_set_rx_vlan_action(eth_dev, &entry, false, false); + + if (flags & DEF_F_ENTRY) + dev->vlan_info.def_rx_mcam_ent = entry; + + return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX, + flags & VLAN_DROP); +} + +/* Installs/Removes/Modifies default rx entry */ +static int +nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip, + bool filter, bool enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + uint16_t flags = 0; + int mcam_idx, rc; + + /* Use default mcam entry to either drop vlan traffic when + * vlan filter is on or strip vtag when strip is enabled. + * Allocate default entry which matches port mac address + * and vtag(ctag/stag) flags with drop action. + */ + if (!vlan->def_rx_mcam_idx) { + if (!eth_dev->data->promiscuous) + flags = MAC_ADDR_MATCH; + + if (filter && enable) + flags |= VTAG_F_MATCH | VLAN_DROP; + else if (strip && enable) + flags |= VTAG_F_MATCH; + else + return 0; + + flags |= DEF_F_ENTRY; + + mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags); + if (mcam_idx < 0) { + otx2_err("Failed to config vlan mcam"); + return -mcam_idx; + } + + vlan->def_rx_mcam_idx = mcam_idx; + return 0; + } + + /* Filter is already enabled, so packets would be dropped anyways. No + * processing needed for enabling strip wrt mcam entry. + */ + + /* Filter disable request */ + if (vlan->filter_on && filter && !enable) { + vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF); + + /* Free default rx entry only when + * 1. strip is not on and + * 2. qinq entry is allocated before default entry. 
+ */ + if (vlan->strip_on || + (vlan->qinq_on && !vlan->qinq_before_def)) { + if (eth_dev->data->dev_conf.rxmode.mq_mode == + ETH_MQ_RX_RSS) + vlan->def_rx_mcam_ent.action |= + NIX_RX_ACTIONOP_RSS; + else + vlan->def_rx_mcam_ent.action |= + NIX_RX_ACTIONOP_UCAST; + return nix_vlan_mcam_write(eth_dev, + vlan->def_rx_mcam_idx, + &vlan->def_rx_mcam_ent, + NIX_INTF_RX, 1); + } else { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + vlan->def_rx_mcam_idx = 0; + } + } + + /* Filter enable request */ + if (!vlan->filter_on && filter && enable) { + vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF); + vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP; + return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx, + &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1); + } + + /* Strip disable request */ + if (vlan->strip_on && strip && !enable) { + if (!vlan->filter_on && + !(vlan->qinq_on && !vlan->qinq_before_def)) { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + vlan->def_rx_mcam_idx = 0; + } + } + + return 0; +} + +/* Configure vlan stripping on or off */ +static int +nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_vtag_config *vtag_cfg; + int rc = -EINVAL; + + rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable); + if (rc) { + otx2_err("Failed to config default rx entry"); + return rc; + } + + vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox); + /* cfg_type = 1 for rx vlan cfg */ + vtag_cfg->cfg_type = VTAG_RX; + + if (enable) + vtag_cfg->rx.strip_vtag = 1; + else + vtag_cfg->rx.strip_vtag = 0; + + /* Always capture */ + vtag_cfg->rx.capture_vtag = 1; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + /* Use rx vtag type index[0] for now */ + vtag_cfg->rx.vtag_type = 0; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + dev->vlan_info.strip_on = enable; + return rc; +} + +/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */ +static int +nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, + uint16_t vlan_id) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc = -EINVAL; + + if (!vlan_id && enable) { + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, + enable); + if (rc) { + otx2_err("Failed to config vlan mcam"); + return rc; + } + dev->vlan_info.filter_on = enable; + return 0; + } + + if (!vlan_id && !enable) { + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, + enable); + if (rc) { + otx2_err("Failed to config vlan mcam"); + return rc; + } + dev->vlan_info.filter_on = enable; + return 0; + } + + return 0; +} + +/* Configure double vlan(qinq) on or off */ +static int +otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev, + const uint8_t enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan_info; + int mcam_idx; + int rc; + + vlan_info = &dev->vlan_info; + + if (!enable) { + if (!vlan_info->qinq_mcam_idx) + return 0; + + rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx); + if (rc) + return rc; + + vlan_info->qinq_mcam_idx = 0; + dev->vlan_info.qinq_on = 0; + vlan_info->qinq_before_def = 0; + return 0; + } + + if (eth_dev->data->promiscuous) + mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH); + else + mcam_idx = nix_vlan_mcam_config(eth_dev, 0, + QINQ_F_MATCH | MAC_ADDR_MATCH); + if (mcam_idx < 0) + return mcam_idx; + + if (!vlan_info->def_rx_mcam_idx) + 
vlan_info->qinq_before_def = 1; + + vlan_info->qinq_mcam_idx = mcam_idx; + dev->vlan_info.qinq_on = 1; + return 0; +} + +int +otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t offloads = dev->rx_offloads; + struct rte_eth_rxmode *rxmode; + int rc; + + rxmode = ð_dev->data->dev_conf.rxmode; + + if (mask & ETH_VLAN_EXTEND_MASK) { + otx2_err("Extend offload not supported"); + return -ENOTSUP; + } + + if (mask & ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { + offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; + rc = nix_vlan_hw_strip(eth_dev, true); + } else { + offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + rc = nix_vlan_hw_strip(eth_dev, false); + } + if (rc) + goto done; + } + + if (mask & ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) { + offloads |= DEV_RX_OFFLOAD_VLAN_FILTER; + rc = nix_vlan_hw_filter(eth_dev, true, 0); + } else { + offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER; + rc = nix_vlan_hw_filter(eth_dev, false, 0); + } + if (rc) + goto done; + } + + if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) { + if (!dev->vlan_info.qinq_on) { + offloads |= DEV_RX_OFFLOAD_QINQ_STRIP; + rc = otx2_nix_config_double_vlan(eth_dev, true); + if (rc) + goto done; + } + } else { + if (dev->vlan_info.qinq_on) { + offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP; + rc = otx2_nix_config_double_vlan(eth_dev, false); + if (rc) + goto done; + } + } + + if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_QINQ_STRIP)) { + dev->rx_offloads |= offloads; + dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + } + +done: + return rc; +} + static int nix_vlan_rx_mkex_offset(uint64_t mask) { @@ -170,7 +650,7 @@ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); - int rc; + int rc, mask; /* Port initialized for first time or restarted */ if (!dev->configured) { @@ -179,12 +659,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) otx2_err("Failed to get vlan mkex info rc=%d", rc); return rc; } + + TAILQ_INIT(&dev->vlan_info.fltr_tbl); } + + mask = + ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK; + rc = otx2_nix_vlan_offload_set(eth_dev, mask); + if (rc) { + otx2_err("Failed to set vlan offload rc=%d", rc); + return rc; + } + return 0; } int -otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev) +otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev) { + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + int rc; + + if (!dev->configured) { + if (vlan->def_rx_mcam_idx) { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + } + } + + otx2_nix_config_double_vlan(eth_dev, false); + vlan->def_rx_mcam_idx = 0; return 0; } From patchwork Sun Jun 30 18:05:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55731 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A3B9E1BC07; Sun, 30 Jun 2019 20:11:23 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 78F7A1BB21 for ; Sun, 30 Jun 2019 20:08:52 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com 
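(8.16.0.27/8.16.0.27) with SMTP id x5UI54D9015680; Sun, 30 Jun 2019 11:08:52 -0700

Before moving on to the filter patch, one note on the RX vtag action word built by nix_set_rx_vlan_action() in the previous patch: the 64-bit action packs a valid bit, a layer id and a relative pointer for VTAG0 into its low 16 bits, and repeats the same trio shifted up by 32 bits for VTAG1 in the QinQ case. The sketch below mimics that packing with placeholder constants; the real values (NIX_RX_VTAGACTION_*, NPC_LID_LB) live in the driver's hardware headers and are assumptions here.

#include <stdio.h>
#include <stdint.h>

/* Placeholder constants: the real values come from the otx2 hardware
 * definitions (NIX_RX_VTAGACTION_*, NPC_LID_LB). */
#define VTAG_VALID	1ULL
#define LID_LB		4ULL	/* assumed encoding of NPC_LID_LB */
#define VTAG0_RELPTR	0ULL
#define VTAG1_RELPTR	4ULL

static uint64_t rx_vtag_action(int qinq)
{
	uint64_t act = 0;

	/* VTAG0 trio: valid bit at 15, layer id at bit 8, relptr in the
	 * low byte. For QinQ the low half carries the VTAG1 relptr. */
	act |= VTAG_VALID << 15;
	act |= LID_LB << 8;
	act |= qinq ? VTAG1_RELPTR : VTAG0_RELPTR;

	if (qinq) {
		/* VTAG1 trio: the same layout shifted up by 32 bits */
		act |= VTAG_VALID << 47;
		act |= LID_LB << 40;
		act |= VTAG0_RELPTR << 32;
	}
	return act;
}

int main(void)
{
	printf("single vlan: 0x%016llx\n",
	       (unsigned long long)rx_vtag_action(0));
	printf("qinq:        0x%016llx\n",
	       (unsigned long long)rx_vtag_action(1));
	return 0;
}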
To: John McNamara, Marko Kovacevic, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
CC: Vivek Sharma
Date: Sun, 30 Jun 2019 23:35:57 +0530
Message-ID: <20190630180609.36705-46-jerinj@marvell.com>
In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 45/57] net/octeontx2: support VLAN filters

From: Vivek Sharma

Support setting up VLAN filters so that tagged packets can be received
once the VLAN HW filter offload has been enabled.
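From the application side, the ops added in this patch are reached through the standard ethdev API. A hedged usage sketch follows; the port id and VLAN ids are arbitrary, and the port is assumed to be already configured.

#include <rte_ethdev.h>

/* Enable HW VLAN filtering on a port, then admit two specific tags. */
static int setup_vlan_filtering(uint16_t port)
{
	int mask, rc;

	mask = rte_eth_dev_get_vlan_offload(port);
	if (mask < 0)
		return mask;

	/* Lands in otx2_nix_vlan_offload_set() with ETH_VLAN_FILTER_MASK */
	rc = rte_eth_dev_set_vlan_offload(port,
					  mask | ETH_VLAN_FILTER_OFFLOAD);
	if (rc < 0)
		return rc;

	/* Each call lands in otx2_nix_vlan_filter_set(); the driver
	 * requires filtering to be enabled first, as done above. */
	rc = rte_eth_dev_vlan_filter(port, 100, 1);
	if (rc)
		return rc;
	return rte_eth_dev_vlan_filter(port, 200, 1);
}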
Signed-off-by: Vivek Sharma --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/octeontx2.rst | 2 +- drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 4 + drivers/net/octeontx2/otx2_vlan.c | 149 ++++++++++++++++++++- 7 files changed, 157 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index ac4712b0c..37b802999 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow control = Y Flow API = Y VLAN offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index e54c1babe..ccedd1359 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow control = Y Flow API = Y VLAN offload = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 769ab16ee..24df14717 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -17,6 +17,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow API = Y VLAN offload = Y QinQ offload = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index a53b71a6d..d6082e508 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -22,7 +22,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Lock-free Tx queue - Multiple queues for TX and RX - Receiver Side Scaling (RSS) -- MAC filtering +- MAC/VLAN filtering - Generic flow API - VLAN/QinQ stripping and insertion - Port hardware statistics diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 55a5cdc48..78cbd8811 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1346,6 +1346,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .timesync_read_time = otx2_nix_timesync_read_time, .timesync_write_time = otx2_nix_timesync_write_time, .vlan_offload_set = otx2_nix_vlan_offload_set, + .vlan_filter_set = otx2_nix_vlan_filter_set, + .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 50fd18b6e..996ddec47 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -450,6 +450,10 @@ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask); void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable); +int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, + int on); +void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev, + uint16_t queue, int on); /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index 7cf4f3136..6216d6545 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -22,8 +22,8 @@ enum vtag_cfg_dir { }; static int -__rte_unused 
nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, - uint32_t entry, const int enable) +nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, + uint32_t entry, const int enable) { struct npc_mcam_ena_dis_entry_req *req; struct otx2_mbox *mbox = dev->mbox; @@ -460,6 +460,8 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, uint16_t vlan_id) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; int rc = -EINVAL; if (!vlan_id && enable) { @@ -473,6 +475,24 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, return 0; } + /* Enable/disable existing vlan filter entries */ + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (vlan_id) { + if (entry->vlan_id == vlan_id) { + rc = nix_vlan_mcam_enb_dis(dev, + entry->mcam_idx, + enable); + if (rc) + return rc; + } + } else { + rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx, + enable); + if (rc) + return rc; + } + } + if (!vlan_id && !enable) { rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, enable); @@ -487,6 +507,85 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, return 0; } +/* Enable/disable vlan filtering for the given vlan_id */ +int +otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, + int on) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; + int entry_exists = 0; + int rc = -EINVAL; + int mcam_idx; + + if (!vlan_id) { + otx2_err("Vlan Id can't be zero"); + return rc; + } + + if (!vlan->def_rx_mcam_idx) { + otx2_err("Vlan Filtering is disabled, enable it first"); + return rc; + } + + if (on) { + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (entry->vlan_id == vlan_id) { + /* Vlan entry already exists */ + entry_exists = 1; + /* Mcam entry already allocated */ + if (entry->mcam_idx) { + rc = nix_vlan_hw_filter(eth_dev, on, + vlan_id); + return rc; + } + break; + } + } + + if (!entry_exists) { + entry = rte_zmalloc("otx2_nix_vlan_entry", + sizeof(struct vlan_entry), 0); + if (!entry) { + otx2_err("Failed to allocate memory"); + return -ENOMEM; + } + } + + /* Enables vlan_id & mac address based filtering */ + if (eth_dev->data->promiscuous) + mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id, + VLAN_ID_MATCH); + else + mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id, + VLAN_ID_MATCH | + MAC_ADDR_MATCH); + if (mcam_idx < 0) { + otx2_err("Failed to config vlan mcam"); + TAILQ_REMOVE(&vlan->fltr_tbl, entry, next); + rte_free(entry); + return mcam_idx; + } + + entry->mcam_idx = mcam_idx; + if (!entry_exists) { + entry->vlan_id = vlan_id; + TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next); + } + } else { + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (entry->vlan_id == vlan_id) { + nix_vlan_mcam_free(dev, entry->mcam_idx); + TAILQ_REMOVE(&vlan->fltr_tbl, entry, next); + rte_free(entry); + break; + } + } + } + return 0; +} + /* Configure double vlan(qinq) on or off */ static int otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev, @@ -594,6 +693,13 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) return rc; } +void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev, + __rte_unused uint16_t queue, + __rte_unused int on) +{ + otx2_err("Not Supported"); +} + static int nix_vlan_rx_mkex_offset(uint64_t mask) { @@ -646,6 +752,27 @@ nix_vlan_get_mkex_info(struct otx2_eth_dev *dev) return 0; } +static void nix_vlan_reinstall_vlan_filters(struct 
rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct vlan_entry *entry; + int rc; + + /* VLAN filters can't be set without setting filtern on */ + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true); + if (rc) { + otx2_err("Failed to reinstall vlan filters"); + return; + } + + TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) { + rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true); + if (rc) + otx2_err("Failed to reinstall filter for vlan:%d", + entry->vlan_id); + } +} + int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) { @@ -661,6 +788,11 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) } TAILQ_INIT(&dev->vlan_info.fltr_tbl); + } else { + /* Reinstall all mcam entries now if filter offload is set */ + if (eth_dev->data->dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_FILTER) + nix_vlan_reinstall_vlan_filters(eth_dev); } mask = @@ -679,8 +811,21 @@ otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; int rc; + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (!dev->configured) { + TAILQ_REMOVE(&vlan->fltr_tbl, entry, next); + rte_free(entry); + } else { + /* MCAM entries freed by flow_fini & lf_free on + * port stop. + */ + entry->mcam_idx = 0; + } + } + if (!dev->configured) { if (vlan->def_rx_mcam_idx) { rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); From patchwork Sun Jun 30 18:05:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55714 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 954FE1BA9F; Sun, 30 Jun 2019 20:10:42 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 8FF531BB21 for ; Sun, 30 Jun 2019 20:08:54 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI6fuA028350 for ; Sun, 30 Jun 2019 11:08:53 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=rDCnFMTgU6X/c2v/R7nx1D/+L7oeqJ/Jr40tFuFdrtg=; b=e73dP+GaX0RrzjRYDEsT5NuzIAzSCXMnBI8bHvbeaX4oaeSPvsDAuSQXpgtpws5EkjZq YuN663ykaEwhDfJXMyAPY+H52PaO4XpUDjt0O0v2nFs0LdKg4dudGYHD071Fpj/lD7Tx pY3V8fbZEl7YsVde3SYPl/oOwKknpBMFI+aay9bpzzVDVqwWw+dS5lE2NHmGG7AxNq9Q b4j7bCAp8ErNGGnM3lFs0XQ3OUWF6OKQ8skc/5/EN5leeh9bAN2PIzCCa2BolBczxlLQ WoW1NQnmdTZmZBP/G08/YF/SHzQ9J7eWWcwerjq+JQvm8N3jXGTv2nATwC1BFJeyX4Zc Qg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gnj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:08:53 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:52 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:52 -0700 Received: from jerin-lab.marvell.com 
(jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C223A3F703F; Sun, 30 Jun 2019 11:08:50 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vivek Sharma Date: Sun, 30 Jun 2019 23:35:58 +0530 Message-ID: <20190630180609.36705-47-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 46/57] net/octeontx2: support VLAN TPID and PVID for Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Implement support for setting VLAN TPID and PVID for Tx packets. Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 5 +- drivers/net/octeontx2/otx2_vlan.c | 193 ++++++++++++++++++++++++++++ 3 files changed, 199 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 78cbd8811..ad305dcd8 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1348,6 +1348,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .vlan_offload_set = otx2_nix_vlan_offload_set, .vlan_filter_set = otx2_nix_vlan_filter_set, .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set, + .vlan_tpid_set = otx2_nix_vlan_tpid_set, + .vlan_pvid_set = otx2_nix_vlan_pvid_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 996ddec47..12db92257 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -453,7 +453,10 @@ void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable); int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on); void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev, - uint16_t queue, int on); + uint16_t queue, int on); +int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, uint16_t tpid); +int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index 6216d6545..dc0f4e032 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -82,6 +82,39 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev, entry->vtag_action = vtag_action; } +static void +nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type, + int vtag_index) +{ + union { + uint64_t reg; + struct nix_tx_vtag_action_s act; + } vtag_action; + + uint64_t action; + + action = NIX_TX_ACTIONOP_UCAST_DEFAULT; + + /* + * Take offset from LA since in case of untagged packet, + * lbptr is zero. 
+ */ + if (type == ETH_VLAN_TYPE_OUTER) { + vtag_action.act.vtag0_def = vtag_index; + vtag_action.act.vtag0_lid = NPC_LID_LA; + vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT; + vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR; + } else { + vtag_action.act.vtag1_def = vtag_index; + vtag_action.act.vtag1_lid = NPC_LID_LA; + vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT; + vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR; + } + + entry->action = action; + entry->vtag_action = vtag_action.reg; +} + static int nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) { @@ -416,6 +449,46 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip, return 0; } +/* Installs/Removes default tx entry */ +static int +nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, int vtag_index, + int enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct mcam_entry entry; + uint16_t pf_func; + int rc; + + if (!vlan->def_tx_mcam_idx && enable) { + memset(&entry, 0, sizeof(struct mcam_entry)); + + /* Only pf_func is matched, swap it's bytes */ + pf_func = (dev->pf_func & 0xff) << 8; + pf_func |= (dev->pf_func >> 8) & 0xff; + + /* PF Func extracted to KW1[63:48] */ + entry.kw[1] = (uint64_t)pf_func << 48; + entry.kw_mask[1] = (BIT_ULL(16) - 1) << 48; + + nix_set_tx_vlan_action(&entry, type, vtag_index); + vlan->def_tx_mcam_ent = entry; + + return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, + NIX_INTF_TX, 0); + } + + if (vlan->def_tx_mcam_idx && !enable) { + rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx); + if (rc) + return rc; + vlan->def_rx_mcam_idx = 0; + } + + return 0; +} + /* Configure vlan stripping on or off */ static int nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable) @@ -693,6 +766,126 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) return rc; } +int +otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, uint16_t tpid) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct nix_set_vlan_tpid *tpid_cfg; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox); + + tpid_cfg->tpid = tpid; + if (type == ETH_VLAN_TYPE_OUTER) + tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER; + else + tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + if (type == ETH_VLAN_TYPE_OUTER) + dev->vlan_info.outer_vlan_tpid = tpid; + else + dev->vlan_info.inner_vlan_tpid = tpid; + return 0; +} + +int +otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev); + struct otx2_mbox *mbox = otx2_dev->mbox; + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + struct otx2_vlan_info *vlan; + int rc, rc1, vtag_index = 0; + + if (vlan_id == 0) { + otx2_err("vlan id can't be zero"); + return -EINVAL; + } + + vlan = &otx2_dev->vlan_info; + + if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) { + otx2_err("pvid %d is already enabled", vlan_id); + return -EINVAL; + } + + if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) { + otx2_err("another pvid is enabled, disable that first"); + return -EINVAL; + } + + /* No pvid active */ + if (!on && !vlan->pvid_insert_on) + return 0; + + /* Given pvid already disabled */ + if (!on && vlan->pvid != vlan_id) + return 0; + + vtag_cfg = 
otx2_mbox_alloc_msg_nix_vtag_cfg(mbox); + + if (on) { + vtag_cfg->cfg_type = VTAG_TX; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + + if (vlan->outer_vlan_tpid) + vtag_cfg->tx.vtag0 = + (vlan->outer_vlan_tpid << 16) | vlan_id; + else + vtag_cfg->tx.vtag0 = + ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id); + vtag_cfg->tx.cfg_vtag0 = 1; + } else { + vtag_cfg->cfg_type = VTAG_TX; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + + vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx; + vtag_cfg->tx.free_vtag0 = 1; + } + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (on) { + vtag_index = rsp->vtag0_idx; + } else { + vlan->pvid = 0; + vlan->pvid_insert_on = 0; + vlan->outer_vlan_idx = 0; + } + + rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER, + vtag_index, on); + if (rc < 0) { + printf("Default tx entry failed with rc %d\n", rc); + vtag_cfg->tx.vtag0_idx = vtag_index; + vtag_cfg->tx.free_vtag0 = 1; + vtag_cfg->tx.cfg_vtag0 = 0; + + rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc1) + otx2_err("Vtag free failed"); + + return rc; + } + + if (on) { + vlan->pvid = vlan_id; + vlan->pvid_insert_on = 1; + vlan->outer_vlan_idx = vtag_index; + } + + return 0; +} + void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev, __rte_unused uint16_t queue, __rte_unused int on) From patchwork Sun Jun 30 18:05:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55715 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 51F5B1BA68; Sun, 30 Jun 2019 20:10:51 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 8EECC1BB46 for ; Sun, 30 Jun 2019 20:08:57 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5FMh027215; Sun, 30 Jun 2019 11:08:56 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=qsjJFi4yxwNjYZfJkNEQ21fUR/LGlV6qgIgp22JSQsU=; b=PFKX2A1/3FfNxzZ5l+XzxacppSbByyBpbVljawHZLcknO1IvocFDXr9jMRchZoxPOqga SjZq5rZ4jNDpqBiJ+0CC/heaNNCQJ8xw9tre0sg3qWlaTWa/mCkINMyhwSVv6cF6/xyX 8IzuwKAjpaTqS2apTOSEcSjbBvvNEwQpNIFMIl+PxFNCd7Y88qNnLh4eQUjxE4MenAU1 Iie4YPtvfHLqGHOBfBmDXk3RJuZixtM08hE67OV8O6xV5ZnOVIn/CGWDALHVgkG2IVgg n7E3pWFh72khYcO82JM23s5MeqnnpCSfsAQidHre/hmO5FeDMsNgOaxmiZW3vr9d9xDB Yw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gnq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:08:56 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:55 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:55 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 8FE533F703F; Sun, 30 Jun 2019 11:08:53 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , 
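For completeness, the TPID and PVID ops implemented in the patch above map to the following ethdev entry points. This is a usage sketch under the same assumptions as before: an arbitrary port id, 0x88A8 as the outer (S-TAG) TPID, and VLAN 100 as the PVID.

#include <rte_ethdev.h>

/* Select an S-TAG TPID for the outer VLAN and make VLAN 100 the port
 * PVID, so untagged Tx packets get it inserted. */
static int setup_tx_vlan(uint16_t port)
{
	int rc;

	/* Reaches otx2_nix_vlan_tpid_set() through the vlan_tpid_set op */
	rc = rte_eth_dev_set_vlan_ether_type(port, ETH_VLAN_TYPE_OUTER,
					     0x88A8);
	if (rc)
		return rc;

	/* Reaches otx2_nix_vlan_pvid_set(); the driver allows only one
	 * active PVID at a time. */
	return rte_eth_dev_set_vlan_pvid(port, 100, 1);
}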
Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:35:59 +0530 Message-ID: <20190630180609.36705-48-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 47/57] net/octeontx2: add FW version get operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add firmware version get operation. Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 1 + drivers/net/octeontx2/otx2_ethdev.h | 3 +++ drivers/net/octeontx2/otx2_ethdev_ops.c | 19 +++++++++++++++++++ drivers/net/octeontx2/otx2_flow.c | 7 +++++++ 7 files changed, 33 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 37b802999..211ff93e7 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -33,6 +33,7 @@ Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index ccedd1359..967a3757d 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -31,6 +31,7 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 24df14717..884167c88 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -26,6 +26,7 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y Linux VFIO = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index ad305dcd8..15f46a9bf 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1336,6 +1336,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .filter_ctrl = otx2_nix_dev_filter_ctrl, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, + .fw_version_get = otx2_nix_fw_version_get, .flow_ctrl_get = otx2_nix_flow_ctrl_get, .flow_ctrl_set = otx2_nix_flow_ctrl_set, .timesync_enable = otx2_nix_timesync_enable, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 12db92257..e18483969 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -235,6 +235,7 @@ struct otx2_eth_dev { uint8_t lso_tsov4_idx; uint8_t lso_tsov6_idx; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t mkex_pfl_name[MKEX_NAME_LEN]; uint8_t max_mac_entries; uint8_t lf_tx_stats; uint8_t lf_rx_stats; @@ -340,6 +341,8 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, int 
otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, enum rte_filter_type filter_type, enum rte_filter_op filter_op, void *arg); +int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, + size_t fw_size); int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, struct rte_eth_dev_module_info *modinfo); int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 690d8ac0c..6a3048336 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -210,6 +210,25 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt) return 0; } +int +otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, + size_t fw_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc = (int)fw_size; + + if (fw_size > sizeof(dev->mkex_pfl_name)) + rc = sizeof(dev->mkex_pfl_name); + + rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc); + + rc += 1; /* Add the size of '\0' */ + if (fw_size < (uint32_t)rc) + return rc; + + return 0; +} + int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) { diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 94bd85161..3ddecfb23 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -770,6 +770,7 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev) struct otx2_npc_flow_info *npc = &dev->npc_flow; struct npc_get_kex_cfg_rsp *kex_rsp; struct otx2_mbox *mbox = dev->mbox; + char mkex_pfl_name[MKEX_NAME_LEN]; struct otx2_idev_kex_cfg *idev; int rc = 0; @@ -791,6 +792,12 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev) sizeof(struct npc_get_kex_cfg_rsp)); } + otx2_mbox_memcpy(mkex_pfl_name, + idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN); + + strlcpy((char *)dev->mkex_pfl_name, + mkex_pfl_name, sizeof(dev->mkex_pfl_name)); + flow_process_mkex_cfg(npc, &idev->kex_cfg); done: From patchwork Sun Jun 30 18:06:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55720 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 42DCF1BC0A; Sun, 30 Jun 2019 20:11:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 74BC51BB0C for ; Sun, 30 Jun 2019 20:09:01 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI74rL016918 for ; Sun, 30 Jun 2019 11:09:00 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=0Fd6hRdh1dqmW/ROp5HKOgZlrvbCwJ9KUHd9TcZRVe8=; b=DlALR27PL5fcfC0aWkAsleXMHjGJXNLB8Xs1BSyermS7u5nj++VrKDNwLGGhHiRvoAGz DQOL42NhUoeRnRo4dfyWLZtcY+Hg9M02Wkc8RqVoZNy11WFtT6pN8PlWMmZ5gzmm9XJi IlV2am9xhxpmsrMgx5B7hmpXiGa/wt6RASG8nYkFVxaCmjsa+6hxT125PTabYAo67BdF dULpdO0VNgwsp5PhYceIF+nrxcozAxgcqibQZNUrxZIbesSMTA7Wp+Ko2XYOa7yq8wJx Snq7lDwnJivLI0tKLFskQxx4PMgf6opIVLVS4Be5xU3qKlHXRy8rq3TuEq97Egrcttbu mA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yey-1 
(version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:09:00 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:08:58 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:08:58 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CCA593F703F; Sun, 30 Jun 2019 11:08:56 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Pavan Nikhilesh , Harman Kalra Date: Sun, 30 Jun 2019 23:36:00 +0530 Message-ID: <20190630180609.36705-49-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 48/57] net/octeontx2: add Rx burst support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx burst support. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh Signed-off-by: Harman Kalra --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 2 +- drivers/net/octeontx2/otx2_ethdev.c | 6 - drivers/net/octeontx2/otx2_ethdev.h | 2 + drivers/net/octeontx2/otx2_rx.c | 129 +++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 247 ++++++++++++++++++++++++++++ 6 files changed, 380 insertions(+), 7 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_rx.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index d22ddae33..3e25d2ad4 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -28,6 +28,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_rx.c \ otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 6281ee21b..975b2e715 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -2,7 +2,7 @@ # Copyright(C) 2019 Marvell International Ltd. 
# -sources = files( +sources = files('otx2_rx.c', 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 15f46a9bf..1f8a22300 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -14,12 +14,6 @@ #include "otx2_ethdev.h" -static inline void -otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) -{ - RTE_SET_USED(eth_dev); -} - static inline void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) { diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index e18483969..22cf86981 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -280,6 +280,7 @@ struct otx2_eth_dev { struct otx2_eth_qconf *tx_qconf; struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; + eth_rx_burst_t rx_pkt_burst_no_offload; /* PTP counters */ bool ptp_en; struct otx2_timesync_info tstamp; @@ -482,6 +483,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev); /* Rx and Tx routines */ +void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev); void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); /* Timesync - PTP routines */ diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c new file mode 100644 index 000000000..4d5223e10 --- /dev/null +++ b/drivers/net/octeontx2/otx2_rx.c @@ -0,0 +1,129 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_ethdev.h" +#include "otx2_rx.h" + +#define NIX_DESCS_PER_LOOP 4 +#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x)) +#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ) + +static inline uint16_t +nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata, + const uint16_t pkts, const uint32_t qmask) +{ + uint32_t available = rxq->available; + + /* Update the available count if cached value is not enough */ + if (unlikely(available < pkts)) { + uint64_t reg, head, tail; + + /* Use LDADDA version to avoid reorder */ + reg = otx2_atomic64_add_sync(wdata, rxq->cq_status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) || + reg & BIT_ULL(CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xFFFFF; + head = (reg >> 20) & 0xFFFFF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + rxq->available = available; + } + + return RTE_MIN(pkts, available); +} + +static __rte_always_inline uint16_t +nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t pkts, const uint16_t flags) +{ + struct otx2_eth_rxq *rxq = rx_queue; + const uint64_t mbuf_init = rxq->mbuf_initializer; + const void *lookup_mem = rxq->lookup_mem; + const uint64_t data_off = rxq->data_off; + const uintptr_t desc = rxq->desc; + const uint64_t wdata = rxq->wdata; + const uint32_t qmask = rxq->qmask; + uint16_t packets = 0, nb_pkts; + uint32_t head = rxq->head; + struct nix_cqe_hdr_s *cq; + struct rte_mbuf *mbuf; + + nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask); + + while (packets < nb_pkts) { + /* Prefetch N desc ahead */ + rte_prefetch_non_temporal((void *)(desc + (CQE_SZ(head + 2)))); + cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head)); + + mbuf = nix_get_mbuf_from_cqe(cq, data_off); + + otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init, + flags); + otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags); + rx_pkts[packets++] = mbuf; + otx2_prefetch_store_keep(mbuf); + head++; + head &= qmask; + } + + 
rxq->head = head; + rxq->available -= nb_pkts; + + /* Free all the CQs that we've processed */ + otx2_write64((wdata | nb_pkts), rxq->cq_door); + + return nb_pkts; +} + + +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \ +} \ + +NIX_RX_FASTPATH_MODES +#undef R + +static inline void +pick_rx_func(struct rte_eth_dev *eth_dev, + const eth_rx_burst_t rx_burst[2][2][2][2][2][2]) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */ + eth_dev->rx_pkt_burst = rx_burst + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)]; +} + +void +otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) +{ + const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name, + +NIX_RX_FASTPATH_MODES +#undef R + }; + + pick_rx_func(eth_dev, nix_eth_rx_burst); + + rte_mb(); +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 7dc34d705..32343c27b 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -15,7 +15,10 @@ PTYPE_TUNNEL_ARRAY_SZ) *\ sizeof(uint16_t)) +#define NIX_RX_OFFLOAD_NONE (0) +#define NIX_RX_OFFLOAD_RSS_F BIT(0) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2) #define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3) #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) @@ -30,4 +33,248 @@ struct otx2_timesync_info { uint8_t rx_ready; } __rte_cache_aligned; +union mbuf_initializer { + struct { + uint16_t data_off; + uint16_t refcnt; + uint16_t nb_segs; + uint16_t port; + } fields; + uint64_t value; +}; + +static __rte_always_inline void +otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf, + struct otx2_timesync_info *tstamp, const uint16_t flag) +{ + if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) && + mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC && + (mbuf->data_off == RTE_PKTMBUF_HEADROOM + + NIX_TIMESYNC_RX_OFFSET)) { + uint64_t *tstamp_ptr; + + /* Deal with rx timestamp */ + tstamp_ptr = rte_pktmbuf_mtod_offset(mbuf, uint64_t *, + -NIX_TIMESYNC_RX_OFFSET); + mbuf->timestamp = rte_be_to_cpu_64(*tstamp_ptr); + tstamp->rx_tstamp = mbuf->timestamp; + tstamp->rx_ready = 1; + mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST + | PKT_RX_TIMESTAMP; + } +} + +static __rte_always_inline uint64_t +nix_clear_data_off(uint64_t oldval) +{ + union mbuf_initializer mbuf_init = { .value = oldval }; + + mbuf_init.fields.data_off = 0; + return mbuf_init.value; +} + +static __rte_always_inline struct rte_mbuf * +nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off) +{ + rte_iova_t buff; + + /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */ + buff = *((rte_iova_t *)((uint64_t *)cq + 9)); + return (struct rte_mbuf *)(buff - data_off); +} + + +static __rte_always_inline uint32_t +nix_ptype_get(const void * const lookup_mem, const uint64_t in) +{ + const uint16_t * const ptype = lookup_mem; + const uint16_t lg_lf_le = (in & 0xFFF000000000000) >> 48; + 
const uint16_t tu_l2 = ptype[(in & 0x000FFF000000000) >> 36]; + const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lg_lf_le]; + + return (il4_tu << PTYPE_WIDTH) | tu_l2; +} + +static __rte_always_inline uint32_t +nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in) +{ + const uint32_t * const ol_flags = (const uint32_t *) + ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ); + + return ol_flags[(in & 0xfff00000) >> 20]; +} + +static inline uint64_t +nix_update_match_id(const uint16_t match_id, uint64_t ol_flags, + struct rte_mbuf *mbuf) +{ + /* There is no separate bit to indicate whether match_id + * is valid, and no flag to distinguish an + * RTE_FLOW_ACTION_TYPE_FLAG action from an + * RTE_FLOW_ACTION_TYPE_MARK action. The former is handled + * by treating 0 as an invalid value and using an inc/dec + * match_id pair when MARK is activated. The latter is + * handled by defining OTX2_FLOW_MARK_DEFAULT as the value + * for RTE_FLOW_ACTION_TYPE_MARK. Consequently, + * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and + * OTX2_FLOW_ACTION_FLAG_DEFAULT are not used as match_id, + * i.e. valid mark_ids range from + * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2. + */ + if (likely(match_id)) { + ol_flags |= PKT_RX_FDIR; + if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) { + ol_flags |= PKT_RX_FDIR_ID; + mbuf->hash.fdir.hi = match_id - 1; + } + } + + return ol_flags; +} + +static __rte_always_inline void +otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, + struct rte_mbuf *mbuf, const void *lookup_mem, + const uint64_t val, const uint16_t flag) +{ + const struct nix_rx_parse_s *rx = + (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1); + const uint64_t w1 = *(const uint64_t *)rx; + const uint16_t len = rx->pkt_lenm1 + 1; + uint64_t ol_flags = 0; + + /* Mark mempool obj as "get" as it is alloc'ed by NIX */ + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + + if (flag & NIX_RX_OFFLOAD_PTYPE_F) + mbuf->packet_type = nix_ptype_get(lookup_mem, w1); + else + mbuf->packet_type = 0; + + if (flag & NIX_RX_OFFLOAD_RSS_F) { + mbuf->hash.rss = tag; + ol_flags |= PKT_RX_RSS_HASH; + } + + if (flag & NIX_RX_OFFLOAD_CHECKSUM_F) + ol_flags |= nix_rx_olflags_get(lookup_mem, w1); + + if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) { + if (rx->vtag0_gone) { + ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; + mbuf->vlan_tci = rx->vtag0_tci; + } + if (rx->vtag1_gone) { + ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED; + mbuf->vlan_tci_outer = rx->vtag1_tci; + } + } + + if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F) + ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf); + + mbuf->ol_flags = ol_flags; + *(uint64_t *)(&mbuf->rearm_data) = val; + mbuf->pkt_len = len; + + mbuf->data_len = len; +} + +#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F +#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F +#define RSS_F NIX_RX_OFFLOAD_RSS_F +#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F +#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F +#define TS_F NIX_RX_OFFLOAD_TSTAMP_F + +/* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */ +#define NIX_RX_FASTPATH_MODES \ +R(no_offload, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \ +R(rss, 0, 0, 0, 0, 0, 1, RSS_F) \ +R(ptype, 0, 0, 0, 0, 1, 0, PTYPE_F) \ +R(ptype_rss, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \ +R(cksum, 0, 0, 0, 1, 0, 0, CKSUM_F) \ +R(cksum_rss, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \ +R(cksum_ptype, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \ +R(cksum_ptype_rss, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\ +R(vlan, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \ +R(vlan_rss, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \ +R(vlan_ptype, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \ +R(vlan_ptype_rss, 0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\ +R(vlan_cksum, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \ +R(vlan_cksum_rss, 0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\ +R(vlan_cksum_ptype, 0, 0, 1, 1, 1, 0, \ + RX_VLAN_F | CKSUM_F | PTYPE_F) \ +R(vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, \ + RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(mark, 0, 1, 0, 0, 0, 0, MARK_F) \ +R(mark_rss, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \ +R(mark_ptype, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \ +R(mark_ptype_rss, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)\ +R(mark_cksum, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \ +R(mark_cksum_rss, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)\ +R(mark_cksum_ptype, 0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)\ +R(mark_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, \ + MARK_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(mark_vlan, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \ +R(mark_vlan_rss, 0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)\ +R(mark_vlan_ptype, 0, 1, 1, 0, 1, 0, \ + MARK_F | RX_VLAN_F | PTYPE_F) \ +R(mark_vlan_ptype_rss, 0, 1, 1, 0, 1, 1, \ + MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \ +R(mark_vlan_cksum, 0, 1, 1, 1, 0, 0, \ + MARK_F | RX_VLAN_F | CKSUM_F) \ +R(mark_vlan_cksum_rss, 0, 1, 1, 1, 0, 1, \ + MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \ +R(mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 0, \ + MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \ +R(mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, \ + MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(ts, 1, 0, 0, 0, 0, 0, TS_F) \ +R(ts_rss, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \ +R(ts_ptype, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \ +R(ts_ptype_rss, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)\ +R(ts_cksum, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \ +R(ts_cksum_rss, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)\ +R(ts_cksum_ptype, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)\ +R(ts_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, \ + TS_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(ts_vlan, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \ +R(ts_vlan_rss, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F)\ +R(ts_vlan_ptype, 1, 0, 1, 0, 1, 0, TS_F | RX_VLAN_F | PTYPE_F)\ +R(ts_vlan_ptype_rss, 1, 0, 1, 0, 1, 1, \ + TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \ +R(ts_vlan_cksum, 1, 0, 1, 1, 0, 0, \ + TS_F | RX_VLAN_F | CKSUM_F) \ +R(ts_vlan_cksum_rss, 1, 0, 1, 1, 0, 1, \ + TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \ +R(ts_vlan_cksum_ptype, 1, 0, 1, 1, 1, 0, \ + TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \ +R(ts_vlan_cksum_ptype_rss, 1, 0, 1, 1, 1, 1, \ + TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(ts_mark, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \ +R(ts_mark_rss, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F)\ +R(ts_mark_ptype, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F)\ +R(ts_mark_ptype_rss, 1, 1, 0, 0, 1, 1, \ + TS_F | MARK_F | PTYPE_F | RSS_F) \ +R(ts_mark_cksum, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F)\ +R(ts_mark_cksum_rss, 1, 1, 0, 1, 0, 1, \ + TS_F | MARK_F | CKSUM_F | RSS_F)\ +R(ts_mark_cksum_ptype, 1, 1, 0, 1, 1, 0, \ + TS_F | MARK_F | CKSUM_F | PTYPE_F) \ +R(ts_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, \ + TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(ts_mark_vlan, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\ +R(ts_mark_vlan_rss, 1, 1, 1, 0, 0, 1, \ + TS_F | MARK_F | RX_VLAN_F | RSS_F)\ +R(ts_mark_vlan_ptype, 1, 1, 1, 0, 1, 0, \ + TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \ +R(ts_mark_vlan_ptype_rss, 1, 1, 1, 0, 1, 1, \ + TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \ +R(ts_mark_vlan_cksum, 1, 1, 1, 1, 0, 0, \ + TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \ +R(ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 0, 1, \ + TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \ +R(ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 0, \ + TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \ +R(ts_mark_vlan_cksum_ptype_rss, 1, 1, 1, 1, 1, 1, \ + TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) + #endif /* __OTX2_RX_H__ */
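NIX_RX_FASTPATH_MODES above is an X-macro table: each R(...) row expands into one specialized receive function whose offload flags are compile-time constants, so the compiler eliminates every branch a given mode cannot take, and the same rows later populate the [2][2][2][2][2][2] lookup table that pick_rx_func() indexes with the boolean offload bits. A minimal standalone model of the technique, with hypothetical names and flags that are not part of the driver:

#include <stdint.h>
#include <stdio.h>

#define FLAG_A (1u << 0) /* stands in for e.g. NIX_RX_OFFLOAD_RSS_F */
#define FLAG_B (1u << 1) /* stands in for e.g. NIX_RX_OFFLOAD_PTYPE_F */

/* [B] [A] */
#define MODES \
M(none, 0, 0, 0)               \
M(a,    0, 1, FLAG_A)          \
M(b,    1, 0, FLAG_B)          \
M(b_a,  1, 1, FLAG_B | FLAG_A)

/* One specialized function per row; 'flags' is a constant, so the
 * untaken branches are removed at compile time. */
#define M(name, fb, fa, flags)                                 \
static uint32_t burst_##name(uint32_t x)                       \
{                                                              \
	if ((flags) & FLAG_A)                                  \
		x += 1;                                        \
	if ((flags) & FLAG_B)                                  \
		x *= 2;                                        \
	return x;                                              \
}
MODES
#undef M

int main(void)
{
	/* [B][A] dispatch table, indexed the same way pick_rx_func() does */
	uint32_t (*tbl[2][2])(uint32_t) = {
#define M(name, fb, fa, flags) [fb][fa] = burst_##name,
	MODES
#undef M
	};
	uint32_t offloads = FLAG_A | FLAG_B;

	/* (3 + 1) * 2 = 8 */
	printf("%u\n", tbl[!!(offloads & FLAG_B)][!!(offloads & FLAG_A)](3));
	return 0;
}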
From patchwork Sun Jun 30 18:06:01 2019 X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55732 X-Patchwork-Delegate: jerinj@marvell.com From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K , Anatoly Burakov CC: Pavan Nikhilesh Date: Sun, 30 Jun 2019 23:36:01 +0530 Message-ID: <20190630180609.36705-50-jerinj@marvell.com> In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> Subject: [dpdk-dev] [PATCH v2 49/57] net/octeontx2: add Rx multi segment version From: Nithin Dabilpuram Add multi segment version of packet Receive function.
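With scatter enabled, a frame larger than one mempool buffer arrives as a chain of mbufs: pkt_len and nb_segs are only meaningful on the head segment, while each segment carries its own data_len. A small consumer-side sketch (hypothetical helper name, standard rte_mbuf fields) of how such a chain is expected to add up:

#include <rte_mbuf.h>

/* Returns 1 if the segment chain is consistent with the head's
 * pkt_len/nb_segs accounting, 0 otherwise. */
static inline int
mseg_chain_consistent(const struct rte_mbuf *head)
{
	const struct rte_mbuf *seg;
	uint32_t bytes = 0;
	uint16_t nsegs = 0;

	for (seg = head; seg != NULL; seg = seg->next) {
		bytes += seg->data_len; /* payload in this segment only */
		nsegs++;
	}

	return bytes == head->pkt_len && nsegs == head->nb_segs;
}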
Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 2 + doc/guides/nics/octeontx2.rst | 2 + drivers/net/octeontx2/otx2_rx.c | 25 ++++++++++ drivers/net/octeontx2/otx2_rx.h | 55 +++++++++++++++++++++- 6 files changed, 86 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 211ff93e7..3280cba78 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -24,6 +24,8 @@ Inner RSS = Y VLAN filter = Y Flow control = Y Flow API = Y +Jumbo frame = Y +Scattered Rx = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 967a3757d..315722e60 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -24,6 +24,7 @@ Inner RSS = Y VLAN filter = Y Flow control = Y Flow API = Y +Jumbo frame = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 884167c88..17b223221 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -19,6 +19,8 @@ RSS reta update = Y Inner RSS = Y VLAN filter = Y Flow API = Y +Jumbo frame = Y +Scattered Rx = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index d6082e508..3d9fc5f1d 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -18,6 +18,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Packet type information - Promiscuous mode +- Jumbo frames - SR-IOV VF - Lock-free Tx queue - Multiple queues for TX and RX @@ -28,6 +29,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Port hardware statistics - Link state information - Link flow control +- Scatter-Gather IO support - Debug utilities - Context dump and error interrupt support - IEEE1588 timestamping diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c index 4d5223e10..fca182785 100644 --- a/drivers/net/octeontx2/otx2_rx.c +++ b/drivers/net/octeontx2/otx2_rx.c @@ -92,6 +92,14 @@ otx2_nix_recv_pkts_ ## name(void *rx_queue, \ { \ return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \ } \ + \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + return nix_recv_pkts(rx_queue, rx_pkts, pkts, \ + (flags) | NIX_RX_MULTI_SEG_F); \ +} \ NIX_RX_FASTPATH_MODES #undef R @@ -115,15 +123,32 @@ pick_rx_func(struct rte_eth_dev *eth_dev, void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) { + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name, +NIX_RX_FASTPATH_MODES +#undef R + }; + + const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name, + NIX_RX_FASTPATH_MODES #undef R }; pick_rx_func(eth_dev, nix_eth_rx_burst); + if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) + pick_rx_func(eth_dev, nix_eth_rx_burst_mseg); + + /* Copy multi seg 
version with no offload for tear down sequence */ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev->rx_pkt_burst_no_offload = + nix_eth_rx_burst_mseg[0][0][0][0][0][0]; rte_mb(); } diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 32343c27b..167badd46 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -23,6 +23,11 @@ #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) +/* Flag to control the cqe_to_mbuf conversion function. + * It is defined from the MSB side so that it is never mistaken + * for an offload flag when picking the fast-path function. + */ +#define NIX_RX_MULTI_SEG_F BIT(15) #define NIX_TIMESYNC_RX_OFFSET 8 struct otx2_timesync_info { @@ -133,6 +138,51 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags, return ol_flags; } +static __rte_always_inline void +nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx, + struct rte_mbuf *mbuf, uint64_t rearm) +{ + const rte_iova_t *iova_list; + struct rte_mbuf *head; + const rte_iova_t *eol; + uint8_t nb_segs; + uint64_t sg; + + sg = *(const uint64_t *)(rx + 1); + nb_segs = (sg >> 48) & 0x3; + mbuf->nb_segs = nb_segs; + mbuf->data_len = sg & 0xFFFF; + sg = sg >> 16; + + eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1)); + /* Skip SG_S and first IOVA */ + iova_list = ((const rte_iova_t *)(rx + 1)) + 2; + nb_segs--; + + rearm = rearm & ~0xFFFF; + + head = mbuf; + while (nb_segs) { + mbuf->next = ((struct rte_mbuf *)*iova_list) - 1; + mbuf = mbuf->next; + + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + + mbuf->data_len = sg & 0xFFFF; + sg = sg >> 16; + *(uint64_t *)(&mbuf->rearm_data) = rearm; + nb_segs--; + iova_list++; + + if (!nb_segs && (iova_list + 1 < eol)) { + sg = *(const uint64_t *)(iova_list); + nb_segs = (sg >> 48) & 0x3; + head->nb_segs += nb_segs; + iova_list = (const rte_iova_t *)(iova_list + 1); + } + } +} + static __rte_always_inline void otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, struct rte_mbuf *mbuf, const void *lookup_mem, @@ -178,7 +228,10 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag, *(uint64_t *)(&mbuf->rearm_data) = val; mbuf->pkt_len = len; - mbuf->data_len = len; + if (flag & NIX_RX_MULTI_SEG_F) + nix_cqe_xtract_mseg(rx, mbuf, val); + else + mbuf->data_len = len; } #define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
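For reference, the per-segment sizes that nix_cqe_xtract_mseg() above consumes are packed into one 64-bit NIX_RX_SG_S word: bits 48-49 carry the segment count and each successive 16-bit field a segment's byte count. A standalone sketch of that unpacking, using a fabricated SG value rather than real hardware state:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Fabricated SG word: 2 segments of 0x40 and 0x80 bytes */
	uint64_t sg = (2ULL << 48) | (0x80ULL << 16) | 0x40ULL;
	uint8_t nb_segs = (sg >> 48) & 0x3;
	uint8_t i;

	for (i = 0; i < nb_segs; i++) {
		printf("seg %u: %u bytes\n", i, (unsigned)(sg & 0xFFFF));
		sg >>= 16; /* advance to the next 16-bit size field */
	}
	return 0;
}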
From patchwork Sun Jun 30 18:06:02 2019 X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55723 X-Patchwork-Delegate: jerinj@marvell.com From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K , "John McNamara" , Marko Kovacevic Date: Sun, 30 Jun 2019 23:36:02 +0530 Message-ID: <20190630180609.36705-51-jerinj@marvell.com> In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> Subject: [dpdk-dev] [PATCH v2 50/57] net/octeontx2: add Rx vector version From: Jerin Jacob Add vector version of packet Receive function.
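The vector path uses NEON to process four completion entries per iteration; one representative step is recovering mbuf base addresses by subtracting the queue's data_off from a pair of buffer pointers in a single 128-bit operation. A reduced standalone illustration (fabricated addresses and offset; an AArch64 toolchain with arm_neon.h is assumed):

#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
	/* Two fabricated buffer addresses, standing in for the pointers
	 * pulled out of a pair of NIX_RX_SG_S descriptors */
	uint64_t bufs[2] = { 0x100080, 0x200080 };
	uint64x2_t v = vld1q_u64(bufs);
	/* Subtract data_off from both lanes at once, the same step the
	 * vector Rx performs with vqsubq_u64() to reach the mbufs */
	uint64x2_t mb = vqsubq_u64(v, vdupq_n_u64(0x80));

	printf("%#llx %#llx\n",
	       (unsigned long long)vgetq_lane_u64(mb, 0),
	       (unsigned long long)vgetq_lane_u64(mb, 1));
	return 0;
}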
Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 2 + drivers/net/octeontx2/otx2_rx.c | 259 +++++++++++++++++++++++++++++- 4 files changed, 262 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 3d9fc5f1d..9d6596ad8 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Link state information - Link flow control - Scatter-Gather IO support +- Vector Poll mode driver - Debug utilities - Context dump and error interrupt support - IEEE1588 timestamping diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 3e25d2ad4..a5f125655 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -14,6 +14,7 @@ CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2 CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2 CFLAGS += -O3 +CFLAGS += -flax-vector-conversions ifneq ($(CONFIG_RTE_ARCH_64),y) CFLAGS += -Wno-int-to-pointer-cast diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 975b2e715..9d151f88d 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -24,6 +24,8 @@ sources = files('otx2_rx.c', deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2'] +cflags += ['-flax-vector-conversions'] + extra_flags = [] # This integrated controller runs only on a arm64 machine, remove 32bit warnings if not dpdk_conf.get('RTE_ARCH_64') diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c index fca182785..deefe9588 100644 --- a/drivers/net/octeontx2/otx2_rx.c +++ b/drivers/net/octeontx2/otx2_rx.c @@ -84,6 +84,239 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_pkts; } +#if defined(RTE_ARCH_ARM64) + +static __rte_always_inline uint64_t +nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f) +{ + if (w2 & BIT_ULL(21) /* vtag0_gone */) { + ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; + *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5); + } + + return ol_flags; +} + +static __rte_always_inline uint64_t +nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf) +{ + if (w2 & BIT_ULL(23) /* vtag1_gone */) { + ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED; + mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48); + } + + return ol_flags; +} + +static __rte_always_inline uint16_t +nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t pkts, const uint16_t flags) +{ + struct otx2_eth_rxq *rxq = rx_queue; uint16_t packets = 0; + uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23; + const uint64_t mbuf_initializer = rxq->mbuf_initializer; + const uint64x2_t data_off = vdupq_n_u64(rxq->data_off); + uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3; + uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer); + uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer); + uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer); + uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer); + struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3; + const uint16_t *lookup_mem = rxq->lookup_mem; + const uint32_t qmask = rxq->qmask; + const uint64_t wdata = rxq->wdata; + const uintptr_t desc = rxq->desc; + uint8x16_t f0, f1, f2, f3; + uint32_t head = rxq->head; + + pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask); + /* Packets has to be 
floor-aligned to NIX_DESCS_PER_LOOP */ + pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP); + + while (packets < pkts) { + /* Get the CQ pointers, since the ring size is multiple of + * 4, We can avoid checking the wrap around of head + * value after the each access unlike scalar version. + */ + const uintptr_t cq0 = desc + CQE_SZ(head); + + /* Prefetch N desc ahead */ + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11))); + + /* Get NIX_RX_SG_S for size and buffer pointer */ + cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64)); + cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64)); + cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64)); + cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64)); + + /* Extract mbuf from NIX_RX_SG_S */ + mbuf01 = vzip2q_u64(cq0_w8, cq1_w8); + mbuf23 = vzip2q_u64(cq2_w8, cq3_w8); + mbuf01 = vqsubq_u64(mbuf01, data_off); + mbuf23 = vqsubq_u64(mbuf23, data_off); + + /* Move mbufs to scalar registers for future use */ + mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0); + mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1); + mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0); + mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1); + + /* Mask to get packet len from NIX_RX_SG_S */ + const uint8x16_t shuf_msk = { + 0xFF, 0xFF, /* pkt_type set as unknown */ + 0xFF, 0xFF, /* pkt_type set as unknown */ + 0, 1, /* octet 1~0, low 16 bits pkt_len */ + 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */ + 0, 1, /* octet 1~0, 16 bits data_len */ + 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF + }; + + /* Form the rx_descriptor_fields1 with pkt_len and data_len */ + f0 = vqtbl1q_u8(cq0_w8, shuf_msk); + f1 = vqtbl1q_u8(cq1_w8, shuf_msk); + f2 = vqtbl1q_u8(cq2_w8, shuf_msk); + f3 = vqtbl1q_u8(cq3_w8, shuf_msk); + + /* Load CQE word0 and word 1 */ + uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0))); + uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1))); + uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2))); + uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3))); + + if (flags & NIX_RX_OFFLOAD_RSS_F) { + /* Fill rss in the rx_descriptor_fields1 */ + f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3); + f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3); + f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3); + f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3); + ol_flags0 = PKT_RX_RSS_HASH; + ol_flags1 = PKT_RX_RSS_HASH; + ol_flags2 = PKT_RX_RSS_HASH; + ol_flags3 = PKT_RX_RSS_HASH; + } else { + ol_flags0 = 0; ol_flags1 = 0; + ol_flags2 = 0; ol_flags3 = 0; + } + + if (flags & NIX_RX_OFFLOAD_PTYPE_F) { + /* Fill packet_type in the rx_descriptor_fields1 */ + f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq0_w0, 1)), f0, 0); + f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq1_w0, 1)), f1, 0); + f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq2_w0, 1)), f2, 0); + f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq3_w0, 1)), f3, 0); + } + + if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) { + ol_flags0 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq0_w0, 1)); + ol_flags1 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq1_w0, 1)); + ol_flags2 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq2_w0, 1)); + ol_flags3 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq3_w0, 1)); + } + + if (flags 
& NIX_RX_OFFLOAD_VLAN_STRIP_F) { + uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16); + uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16); + uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16); + uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16); + + ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0); + ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1); + ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2); + ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3); + + ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0); + ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1); + ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2); + ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3); + } + + if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) { + ol_flags0 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0); + ol_flags1 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1); + ol_flags2 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2); + ol_flags3 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3); + } + + /* Form rearm_data with ol_flags */ + rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1); + rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1); + rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1); + rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1); + + /* Update rx_descriptor_fields1 */ + vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0); + vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1); + vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2); + vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3); + + /* Update rearm_data */ + vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0); + vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1); + vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2); + vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3); + + /* Store the mbufs to rx_pkts */ + vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01); + vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23); + + /* Prefetch mbufs */ + otx2_prefetch_store_keep(mbuf0); + otx2_prefetch_store_keep(mbuf1); + otx2_prefetch_store_keep(mbuf2); + otx2_prefetch_store_keep(mbuf3); + + /* Mark mempool obj as "get" as it is alloc'ed by NIX */ + __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1); + __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1); + __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1); + __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1); + + /* Advance head pointer and packets */ + head += NIX_DESCS_PER_LOOP; head &= qmask; + packets += NIX_DESCS_PER_LOOP; + } + + rxq->head = head; + rxq->available -= packets; + + rte_cio_wmb(); + /* Free all the CQs that we've processed */ + otx2_write64((rxq->wdata | packets), rxq->cq_door); + + return packets; +} + +#else + +static inline uint16_t +nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t pkts, const uint16_t flags) +{ + RTE_SET_USED(rx_queue); + RTE_SET_USED(rx_pkts); + RTE_SET_USED(pkts); + RTE_SET_USED(flags); + + return 0; +} + +#endif #define R(name, f5, f4, f3, f2, f1, f0, flags) \ static uint16_t __rte_noinline __hot \ @@ -100,6 +333,16 @@ otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \ return nix_recv_pkts(rx_queue, rx_pkts, pkts, \ (flags) | NIX_RX_MULTI_SEG_F); \ } \ + \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + /* TSTMP is not supported by vector */ \ + if ((flags) & 
NIX_RX_OFFLOAD_TSTAMP_F) \ + return 0; \ + return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \ +} \ NIX_RX_FASTPATH_MODES #undef R @@ -141,7 +384,21 @@ NIX_RX_FASTPATH_MODES #undef R }; - pick_rx_func(eth_dev, nix_eth_rx_burst); + const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name, + +NIX_RX_FASTPATH_MODES +#undef R + }; + + /* When PTP is enabled, choose the scalar Rx function, as most + * PTP applications receive a single packet per Rx burst. + */ + if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) + pick_rx_func(eth_dev, nix_eth_rx_burst); + else + pick_rx_func(eth_dev, nix_eth_rx_vec_burst); if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) pick_rx_func(eth_dev, nix_eth_rx_burst_mseg); From patchwork Sun Jun 30 18:06:03 2019 X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55728 X-Patchwork-Delegate: jerinj@marvell.com From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Pavan Nikhilesh , Harman Kalra Date: Sun, 30 Jun 2019 23:36:03 +0530 Message-ID: <20190630180609.36705-52-jerinj@marvell.com> In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, ,
signatures=0 Subject: [dpdk-dev] [PATCH v2 51/57] net/octeontx2: add Tx burst support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Tx burst support. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh Signed-off-by: Harman Kalra --- doc/guides/nics/features/octeontx2.ini | 5 + doc/guides/nics/features/octeontx2_vec.ini | 5 + doc/guides/nics/features/octeontx2_vf.ini | 5 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 6 - drivers/net/octeontx2/otx2_ethdev.h | 1 + drivers/net/octeontx2/otx2_tx.c | 94 ++++++++ drivers/net/octeontx2/otx2_tx.h | 261 +++++++++++++++++++++ 10 files changed, 374 insertions(+), 6 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_tx.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 3280cba78..1856d9924 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y @@ -28,6 +29,10 @@ Jumbo frame = Y Scattered Rx = Y VLAN offload = Y QinQ offload = Y +L3 checksum offload = Y +L4 checksum offload = Y +Inner L3 checksum = Y +Inner L4 checksum = Y Packet type parsing = Y Timesync = Y Timestamp offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 315722e60..053fca288 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y @@ -27,6 +28,10 @@ Flow API = Y Jumbo frame = Y VLAN offload = Y QinQ offload = Y +L3 checksum offload = Y +L4 checksum offload = Y +Inner L3 checksum = Y +Inner L4 checksum = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 17b223221..bef451d01 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,6 +11,7 @@ Link status = Y Link status event = Y Runtime Rx queue setup = Y Runtime Tx queue setup = Y +Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y RSS hash = Y @@ -23,6 +24,10 @@ Jumbo frame = Y Scattered Rx = Y VLAN offload = Y QinQ offload = Y +L3 checksum offload = Y +L4 checksum offload = Y +Inner L3 checksum = Y +Inner L4 checksum = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 9d6596ad8..90ca4e2d2 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -25,6 +25,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Receiver Side Scaling (RSS) - MAC/VLAN filtering - Generic flow API +- Inner and Outer Checksum offload - VLAN/QinQ stripping and insertion - Port hardware statistics - Link state information diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 
a5f125655..c187d2555 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_rx.c \ + otx2_tx.c \ otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 9d151f88d..94bf09a78 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files('otx2_rx.c', + 'otx2_tx.c', 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 1f8a22300..44753cbf5 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -14,12 +14,6 @@ #include "otx2_ethdev.h" -static inline void -otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) -{ - RTE_SET_USED(eth_dev); -} - static inline uint64_t nix_get_rx_offload_capa(struct otx2_eth_dev *dev) { diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 22cf86981..1f9323fe3 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -484,6 +484,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, /* Rx and Tx routines */ void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev); +void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev); void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); /* Timesync - PTP routines */ diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c new file mode 100644 index 000000000..16d69b74f --- /dev/null +++ b/drivers/net/octeontx2/otx2_tx.c @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include + +#include "otx2_ethdev.h" + +#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \ + /* Cached value is low, Update the fc_cache_pkts */ \ + if (unlikely((txq)->fc_cache_pkts < (pkts))) { \ + /* Multiply with sqe_per_sqb to express in pkts */ \ + (txq)->fc_cache_pkts = \ + ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \ + (txq)->sqes_per_sqb_log2; \ + /* Check it again for the room */ \ + if (unlikely((txq)->fc_cache_pkts < (pkts))) \ + return 0; \ + } \ +} while (0) + + +static __rte_always_inline uint16_t +nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, uint64_t *cmd, const uint16_t flags) +{ + struct otx2_eth_txq *txq = tx_queue; uint16_t i; + const rte_iova_t io_addr = txq->io_addr; + void *lmt_addr = txq->lmt_addr; + + NIX_XMIT_FC_OR_RETURN(txq, pkts); + + otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags)); + + /* Lets commit any changes in the packet */ + rte_cio_wmb(); + + for (i = 0; i < pkts; i++) { + otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags); + /* Passing no of segdw as 4: HDR + EXT + SG + SMEM */ + otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0], + tx_pkts[i]->ol_flags, 4, flags); + otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags); + } + + /* Reduce the cached count */ + txq->fc_cache_pkts -= pkts; + + return pkts; +} + +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_xmit_pkts_ ## name(void *tx_queue, \ + struct rte_mbuf **tx_pkts, uint16_t pkts) \ +{ \ + uint64_t cmd[sz]; \ + \ + return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \ +} + +NIX_TX_FASTPATH_MODES +#undef T + +static inline void +pick_tx_func(struct rte_eth_dev *eth_dev, + const eth_tx_burst_t tx_burst[2][2][2][2][2]) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */ + eth_dev->tx_pkt_burst = tx_burst + [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)] + [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)] + [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)] + [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] + [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; +} + +void +otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) +{ + const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = { +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ + [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name, + +NIX_TX_FASTPATH_MODES +#undef T + }; + + pick_tx_func(eth_dev, nix_eth_tx_burst); + + rte_mb(); +} diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index 4d0993f87..db4c1f70f 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -25,4 +25,265 @@ #define NIX_TX_NEED_EXT_HDR \ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F) +/* Function to determine no of tx subdesc required in case ext + * sub desc is enabled. + */ +static __rte_always_inline int +otx2_nix_tx_ext_subs(const uint16_t flags) +{ + return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 : + ((flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) ? 
1 : 0); +} + +static __rte_always_inline void +otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, + const uint64_t ol_flags, const uint16_t no_segdw, + const uint16_t flags) +{ + if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { + struct nix_send_mem_s *send_mem; + uint16_t off = (no_segdw - 1) << 1; + + send_mem = (struct nix_send_mem_s *)(cmd + off); + if (flags & NIX_TX_MULTI_SEG_F) + /* Retrieving the default desc values */ + cmd[off] = send_mem_desc[6]; + + /* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp + * should not be updated at tx tstamp registered address, rather + * a dummy address which is eight bytes ahead would be updated + */ + send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] + + !(ol_flags & PKT_TX_IEEE1588_TMST)); + } +} + +static inline void +otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) +{ + struct nix_send_ext_s *send_hdr_ext; + struct nix_send_hdr_s *send_hdr; + uint64_t ol_flags = 0, mask; + union nix_send_hdr_w1_u w1; + union nix_send_sg_s *sg; + + send_hdr = (struct nix_send_hdr_s *)cmd; + if (flags & NIX_TX_NEED_EXT_HDR) { + send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2); + sg = (union nix_send_sg_s *)(cmd + 4); + /* Clear previous markings */ + send_hdr_ext->w0.lso = 0; + send_hdr_ext->w1.u = 0; + } else { + sg = (union nix_send_sg_s *)(cmd + 2); + } + + if (flags & NIX_TX_NEED_SEND_HDR_W1) { + ol_flags = m->ol_flags; + w1.u = 0; + } + + if (!(flags & NIX_TX_MULTI_SEG_F)) { + send_hdr->w0.total = m->data_len; + send_hdr->w0.aura = + npa_lf_aura_handle_to_aura(m->pool->pool_id); + } + + /* + * L3type: 2 => IPV4 + * 3 => IPV4 with csum + * 4 => IPV6 + * L3type and L3ptr needs to be set for either + * L3 csum or L4 csum or LSO + * + */ + + if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) { + const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM); + const uint8_t ol3type = + ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) + + !!(ol_flags & PKT_TX_OUTER_IP_CKSUM); + + /* Outer L3 */ + w1.ol3type = ol3type; + mask = 0xffffull << ((!!ol3type) << 4); + w1.ol3ptr = ~mask & m->outer_l2_len; + w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len); + + /* Outer L4 */ + w1.ol4type = csum + (csum << 1); + + /* Inner L3 */ + w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_IPV6)) << 2); + w1.il3ptr = w1.ol4ptr + m->l2_len; + w1.il4ptr = w1.il3ptr + m->l3_len; + /* Increment it by 1 if it is IPV4 as 3 is with csum */ + w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM); + + /* Inner L4 */ + w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52; + + /* In case of no tunnel header use only + * shift IL3/IL4 fields a bit to use + * OL3/OL4 for header checksum + */ + mask = !ol3type; + w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) | + ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4)); + + } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) { + const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM); + const uint8_t outer_l2_len = m->outer_l2_len; + + /* Outer L3 */ + w1.ol3ptr = outer_l2_len; + w1.ol4ptr = outer_l2_len + m->outer_l3_len; + /* Increment it by 1 if it is IPV4 as 3 is with csum */ + w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) + + !!(ol_flags & PKT_TX_OUTER_IP_CKSUM); + + /* Outer L4 */ + w1.ol4type = csum + (csum << 1); + + } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) { + const uint8_t l2_len = m->l2_len; + + /* Always use OLXPTR 
and OLXTYPE when only + * one header is present + */ + + /* Inner L3 */ + w1.ol3ptr = l2_len; + w1.ol4ptr = l2_len + m->l3_len; + /* Increment it by 1 if it is IPV4 as 3 is with csum */ + w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_IPV6)) << 2) + + !!(ol_flags & PKT_TX_IP_CKSUM); + + /* Inner L4 */ + w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52; + } + + if (flags & NIX_TX_NEED_EXT_HDR && + flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) { + send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN); + /* HW will update ptr after vlan0 update */ + send_hdr_ext->w1.vlan1_ins_ptr = 12; + send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci; + + send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ); + /* 2B before end of l2 header */ + send_hdr_ext->w1.vlan0_ins_ptr = 12; + send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer; + } + + if (flags & NIX_TX_NEED_SEND_HDR_W1) + send_hdr->w1.u = w1.u; + + if (!(flags & NIX_TX_MULTI_SEG_F)) { + sg->seg1_size = m->data_len; + *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m); + + if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) { + /* Set don't free bit if reference count > 1 */ + if (rte_pktmbuf_prefree_seg(m) == NULL) + send_hdr->w0.df = 1; /* SET DF */ + } + /* Mark mempool object as "put" since it is freed by NIX */ + if (!send_hdr->w0.df) + __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + } +} + + +static __rte_always_inline void +otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr, + const rte_iova_t io_addr, const uint32_t flags) +{ + uint64_t lmt_status; + + do { + otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags)); + lmt_status = otx2_lmt_submit(io_addr); + } while (lmt_status == 0); +} + + +#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F #define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F #define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F #define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F #define TSP_F NIX_TX_OFFLOAD_TSTAMP_F + +/* [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */ +#define NIX_TX_FASTPATH_MODES \ +T(no_offload, 0, 0, 0, 0, 0, 4, \ + NIX_TX_OFFLOAD_NONE) \ +T(l3l4csum, 0, 0, 0, 0, 1, 4, \ + L3L4CSUM_F) \ +T(ol3ol4csum, 0, 0, 0, 1, 0, 4, \ + OL3OL4CSUM_F) \ +T(ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 4, \ + OL3OL4CSUM_F | L3L4CSUM_F) \ +T(vlan, 0, 0, 1, 0, 0, 6, \ + VLAN_F) \ +T(vlan_l3l4csum, 0, 0, 1, 0, 1, 6, \ + VLAN_F | L3L4CSUM_F) \ +T(vlan_ol3ol4csum, 0, 0, 1, 1, 0, 6, \ + VLAN_F | OL3OL4CSUM_F) \ +T(vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 6, \ + VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(noff, 0, 1, 0, 0, 0, 4, \ + NOFF_F) \ +T(noff_l3l4csum, 0, 1, 0, 0, 1, 4, \ + NOFF_F | L3L4CSUM_F) \ +T(noff_ol3ol4csum, 0, 1, 0, 1, 0, 4, \ + NOFF_F | OL3OL4CSUM_F) \ +T(noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 4, \ + NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(noff_vlan, 0, 1, 1, 0, 0, 6, \ + NOFF_F | VLAN_F) \ +T(noff_vlan_l3l4csum, 0, 1, 1, 0, 1, 6, \ + NOFF_F | VLAN_F | L3L4CSUM_F) \ +T(noff_vlan_ol3ol4csum, 0, 1, 1, 1, 0, 6, \ + NOFF_F | VLAN_F | OL3OL4CSUM_F) \ +T(noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 6, \ + NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(ts, 1, 0, 0, 0, 0, 8, \ + TSP_F) \ +T(ts_l3l4csum, 1, 0, 0, 0, 1, 8, \ + TSP_F | L3L4CSUM_F) \ +T(ts_ol3ol4csum, 1, 0, 0, 1, 0, 8, \ + TSP_F | OL3OL4CSUM_F) \ +T(ts_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 8, \ + TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(ts_vlan, 1, 0, 1, 0, 0, 8, \ + TSP_F | VLAN_F) \ +T(ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 8, \ + TSP_F | VLAN_F | L3L4CSUM_F) \ +T(ts_vlan_ol3ol4csum, 1, 0, 1, 1, 0, 8, \ + TSP_F | VLAN_F | OL3OL4CSUM_F) \ +T(ts_vlan_ol3ol4csum_l3l4csum,
1, 0, 1, 1, 1, 8, \ + TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(ts_noff, 1, 1, 0, 0, 0, 8, \ + TSP_F | NOFF_F) \ +T(ts_noff_l3l4csum, 1, 1, 0, 0, 1, 8, \ + TSP_F | NOFF_F | L3L4CSUM_F) \ +T(ts_noff_ol3ol4csum, 1, 1, 0, 1, 0, 8, \ + TSP_F | NOFF_F | OL3OL4CSUM_F) \ +T(ts_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 1, 8, \ + TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \ +T(ts_noff_vlan, 1, 1, 1, 0, 0, 8, \ + TSP_F | NOFF_F | VLAN_F) \ +T(ts_noff_vlan_l3l4csum, 1, 1, 1, 0, 1, 8, \ + TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \ +T(ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 0, 8, \ + TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \ +T(ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 8, \ + TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) + #endif /* __OTX2_TX_H__ */
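[Editorial note, not part of the patch series.] The table above is an X-macro: otx2_tx.c expands it once to generate one specialized transmit function per offload combination, and again to build a lookup table indexed by the individual flag bits. A minimal, self-contained sketch of the same pattern follows; the flag names and helpers here are invented for illustration and are not the driver's code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical offload bits standing in for the NIX_TX_OFFLOAD_* set. */
#define EX_CSUM_F (1u << 0)
#define EX_VLAN_F (1u << 1)

/* One row per flag combination, mirroring NIX_TX_FASTPATH_MODES. */
#define EX_MODES \
T(none,      0, 0, 0)                       \
T(csum,      0, 1, EX_CSUM_F)               \
T(vlan,      1, 0, EX_VLAN_F)               \
T(vlan_csum, 1, 1, EX_VLAN_F | EX_CSUM_F)

/* 'flags' is a compile-time constant in every expansion, so the
 * compiler removes dead branches and fully specializes each entry. */
static inline uint16_t
ex_xmit_worker(uint16_t pkts, const uint16_t flags)
{
	if (flags & EX_CSUM_F)
		; /* checksum descriptor setup would go here */
	if (flags & EX_VLAN_F)
		; /* VLAN descriptor setup would go here */
	return pkts;
}

/* Expand one specialized entry point per table row. */
#define T(name, f1, f0, flags)                  \
static uint16_t ex_xmit_##name(uint16_t pkts)   \
{ return ex_xmit_worker(pkts, (flags)); }
EX_MODES
#undef T

int main(void)
{
	/* Function table indexed by the individual flag bits, like
	 * nix_eth_tx_burst[2][2][2][2][2] (two dimensions here). */
	static uint16_t (*const ex_burst[2][2])(uint16_t) = {
#define T(name, f1, f0, flags) [f1][f0] = ex_xmit_##name,
EX_MODES
#undef T
	};

	printf("%u\n", ex_burst[1][0](32)); /* VLAN-only variant */
	return 0;
}

The payoff of this design is that the hot loop never tests offload flags at runtime: the one-time table lookup in pick_tx_func() selects a function whose unused offload paths were compiled out.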
From patchwork Sun Jun 30 18:06:04 2019 X-Patchwork-Id: 55733 Date: Sun, 30 Jun 2019 23:36:04 +0530 Message-ID: <20190630180609.36705-53-jerinj@marvell.com> Subject: [dpdk-dev] [PATCH v2 52/57] net/octeontx2: add Tx multi segment version From: Nithin Dabilpuram Add multi segment version of packet transmit function. Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_tx.c | 58 +++++++++++++++++++++ drivers/net/octeontx2/otx2_tx.h | 81 +++++++++++++++++++++++++++++ 3 files changed, 143 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 1f9323fe3..f39fdfa1f 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -89,6 +89,10 @@ #define NIX_TX_NB_SEG_MAX 9 #endif +#define NIX_TX_MSEG_SG_DWORDS \ + ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \ + + NIX_TX_NB_SEG_MAX) + /* Apply BP when CQ is 75% full */ #define NIX_CQ_BP_LEVEL (25 * 256 / 100) diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c index 16d69b74f..0ac5ea652 100644 --- a/drivers/net/octeontx2/otx2_tx.c +++ b/drivers/net/octeontx2/otx2_tx.c @@ -49,6 +49,37 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, return pkts; } +static __rte_always_inline uint16_t +nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, uint64_t *cmd, const uint16_t flags) +{ + struct otx2_eth_txq *txq = tx_queue; uint64_t i; + const rte_iova_t io_addr = txq->io_addr; + void *lmt_addr = txq->lmt_addr; + uint16_t segdw; + + NIX_XMIT_FC_OR_RETURN(txq, pkts); + + otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags)); + + /* Let's commit any changes in the packet */ + rte_cio_wmb(); + + for (i = 0; i < pkts; i++) { + otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags); + segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags); + otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0], + tx_pkts[i]->ol_flags, segdw, + flags); + otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw); + } + + /* Reduce the cached count */ + txq->fc_cache_pkts -= pkts; + + return pkts; +} + #define T(name, f4, f3, f2, f1, f0, sz, flags) \ static uint16_t __rte_noinline __hot \ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \ @@ -62,6 +93,20 @@ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \ NIX_TX_FASTPATH_MODES #undef T +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \ + struct rte_mbuf **tx_pkts, uint16_t pkts) \ +{ \ + uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \ + \ + return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \ + (flags) | NIX_TX_MULTI_SEG_F); \ +} + +NIX_TX_FASTPATH_MODES +#undef T + static inline void pick_tx_func(struct rte_eth_dev *eth_dev, const eth_tx_burst_t tx_burst[2][2][2][2][2]) @@ -80,15 +125,28 @@ pick_tx_func, void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) { + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = { #define T(name, f4, f3, f2, f1, f0, sz, flags) \ [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name, +NIX_TX_FASTPATH_MODES +#undef T + }; + + const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = { +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ + [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name, + NIX_TX_FASTPATH_MODES #undef T }; pick_tx_func(eth_dev, nix_eth_tx_burst); + if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) + pick_tx_func(eth_dev,
nix_eth_tx_burst_mseg); + rte_mb(); } diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index db4c1f70f..b75a220ea 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -212,6 +212,87 @@ otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr, } while (lmt_status == 0); } +static __rte_always_inline uint16_t +otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) +{ + struct nix_send_hdr_s *send_hdr; + union nix_send_sg_s *sg; + struct rte_mbuf *m_next; + uint64_t *slist, sg_u; + uint64_t nb_segs; + uint64_t segdw; + uint8_t off, i; + + send_hdr = (struct nix_send_hdr_s *)cmd; + send_hdr->w0.total = m->pkt_len; + send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id); + + if (flags & NIX_TX_NEED_EXT_HDR) + off = 2; + else + off = 0; + + sg = (union nix_send_sg_s *)&cmd[2 + off]; + sg_u = sg->u; + slist = &cmd[3 + off]; + + i = 0; + nb_segs = m->nb_segs; + + /* Fill mbuf segments */ + do { + m_next = m->next; + sg_u = sg_u | ((uint64_t)m->data_len << (i << 4)); + *slist = rte_mbuf_data_iova(m); + /* Set invert df if reference count > 1 */ + if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) + sg_u |= + ((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) << + (i + 55)); + /* Mark mempool object as "put" since it is freed by NIX */ + if (!(sg_u & (1ULL << (i + 55)))) { + m->next = NULL; + __mempool_check_cookies(m->pool, (void **)&m, 1, 0); + } + slist++; + i++; + nb_segs--; + if (i > 2 && nb_segs) { + i = 0; + /* Next SG subdesc */ + *(uint64_t *)slist = sg_u & 0xFC00000000000000; + sg->u = sg_u; + sg->segs = 3; + sg = (union nix_send_sg_s *)slist; + sg_u = sg->u; + slist++; + } + m = m_next; + } while (nb_segs); + + sg->u = sg_u; + sg->segs = i; + segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off]; + /* Roundup extra dwords to multiple of 2 */ + segdw = (segdw >> 1) + (segdw & 0x1); + /* Default dwords */ + segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F); + send_hdr->w0.sizem1 = segdw - 1; + + return segdw; +} + +static __rte_always_inline void +otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr, + rte_iova_t io_addr, uint16_t segdw) +{ + uint64_t lmt_status; + + do { + otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw); + lmt_status = otx2_lmt_submit(io_addr); + } while (lmt_status == 0); +} #define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F #define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
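[Editorial note, not part of the patch.] otx2_nix_prepare_mseg() above packs up to three segment pointers behind each SG sub-descriptor, then sizes the command in 16-byte units before writing send_hdr->w0.sizem1. A small stand-alone model of that accounting, under the assumption that the returned value equals sizem1 + 1:

#include <stdio.h>

/* Editorial model (not driver code) of the dword accounting in
 * otx2_nix_prepare_mseg(): one SG header per three segments, one word
 * per segment pointer, rounded up to 16-byte units, plus the send
 * header and optional ext/timestamp descriptors. */
static unsigned int
model_segdw(unsigned int nb_segs, int has_ext, int has_tstamp)
{
	unsigned int sg_hdrs = (nb_segs + 2) / 3; /* one SG per 3 segs */
	unsigned int words = sg_hdrs + nb_segs;   /* SG hdrs + iova ptrs */
	unsigned int units = (words + 1) / 2;     /* round up, 16B units */

	return units + (has_ext ? 1 : 0) + 1 + (has_tstamp ? 1 : 0);
}

int main(void)
{
	/* 9 segments (NIX_TX_NB_SEG_MAX) with ext header and timestamp */
	printf("segdw = %u\n", model_segdw(9, 1, 1)); /* prints 9 */
	return 0;
}

This also explains the NIX_TX_MSEG_SG_DWORDS sizing added to otx2_ethdev.h: ceil(9 / 3) SG headers plus 9 pointers is the worst-case SG area for a 9-segment packet.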
From patchwork Sun Jun 30 18:06:05 2019 X-Patchwork-Id: 55722 Date: Sun, 30 Jun 2019 23:36:05 +0530 Message-ID: <20190630180609.36705-54-jerinj@marvell.com> Subject: [dpdk-dev] [PATCH v2 53/57] net/octeontx2: add Tx vector version From: Nithin Dabilpuram Add vector version of packet transmit function.
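[Editorial sketch, not part of the patch.] The vector path floors the burst to a multiple of four packets, keeps two packets' descriptor words in each 128-bit register, and stores completed commands to the LMT line. The helper and parameter names below are invented for illustration.

#include <stdint.h>

#if defined(__aarch64__)
#include <arm_neon.h>

#define EX_DESCS_PER_LOOP 4

/* Rough shape of the loop (invented helper, not the driver's code). */
static uint16_t
ex_xmit_vec_skeleton(uint64_t *lmt_line, const uint64_t *desc_w0,
		     uint16_t pkts)
{
	uint16_t i;

	pkts &= ~(EX_DESCS_PER_LOOP - 1); /* RTE_ALIGN_FLOOR(pkts, 4) */

	for (i = 0; i < pkts; i += EX_DESCS_PER_LOOP) {
		uint64x2_t w0_01 = vld1q_u64(&desc_w0[i]);     /* pkts 0,1 */
		uint64x2_t w0_23 = vld1q_u64(&desc_w0[i + 2]); /* pkts 2,3 */

		/* The real code fixes up lengths, auras and checksum
		 * types here with masks, shuffles and table lookups
		 * applied to whole registers at a time. */
		vst1q_u64(lmt_line, w0_01);
		vst1q_u64(lmt_line + 2, w0_23);
		/* ...then submits the LMT line, retrying on failure. */
	}
	return pkts;
}
#endif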
Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/net/octeontx2/otx2_tx.c | 883 +++++++++++++++++++++++++++++++- 1 file changed, 882 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c index 0ac5ea652..6bce55112 100644 --- a/drivers/net/octeontx2/otx2_tx.c +++ b/drivers/net/octeontx2/otx2_tx.c @@ -80,6 +80,859 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, return pkts; } +#if defined(RTE_ARCH_ARM64) + +#define NIX_DESCS_PER_LOOP 4 +static __rte_always_inline uint16_t +nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, const uint16_t flags) +{ + uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3; + uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3; + uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3; + uint64x2_t senddesc01_w0, senddesc23_w0; + uint64x2_t senddesc01_w1, senddesc23_w1; + uint64x2_t sgdesc01_w0, sgdesc23_w0; + uint64x2_t sgdesc01_w1, sgdesc23_w1; + struct otx2_eth_txq *txq = tx_queue; + uint64_t *lmt_addr = txq->lmt_addr; + rte_iova_t io_addr = txq->io_addr; + uint64x2_t ltypes01, ltypes23; + uint64x2_t xtmp128, ytmp128; + uint64x2_t xmask01, xmask23; + uint64x2_t mbuf01, mbuf23; + uint64x2_t cmd00, cmd01; + uint64x2_t cmd10, cmd11; + uint64x2_t cmd20, cmd21; + uint64x2_t cmd30, cmd31; + uint64_t lmt_status, i; + + pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP); + + NIX_XMIT_FC_OR_RETURN(txq, pkts); + + /* Reduce the cached count */ + txq->fc_cache_pkts -= pkts; + + /* Lets commit any changes in the packet */ + rte_cio_wmb(); + + senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]); + senddesc23_w0 = senddesc01_w0; + senddesc01_w1 = vdupq_n_u64(0); + senddesc23_w1 = senddesc01_w1; + sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]); + sgdesc23_w0 = sgdesc01_w0; + + for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) { + mbuf01 = vld1q_u64((uint64_t *)tx_pkts); + mbuf23 = vld1q_u64((uint64_t *)(tx_pkts + 2)); + + /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */ + senddesc01_w0 = vbicq_u64(senddesc01_w0, + vdupq_n_u64(0xFFFFFFFF)); + sgdesc01_w0 = vbicq_u64(sgdesc01_w0, + vdupq_n_u64(0xFFFFFFFF)); + + senddesc23_w0 = senddesc01_w0; + sgdesc23_w0 = sgdesc01_w0; + + tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP; + + /* Move mbufs to iova */ + mbuf0 = (uint64_t *)vgetq_lane_u64(mbuf01, 0); + mbuf1 = (uint64_t *)vgetq_lane_u64(mbuf01, 1); + mbuf2 = (uint64_t *)vgetq_lane_u64(mbuf23, 0); + mbuf3 = (uint64_t *)vgetq_lane_u64(mbuf23, 1); + + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mbuf, buf_iova)); + /* + * Get mbuf's, olflags, iova, pktlen, dataoff + * dataoff_iovaX.D[0] = iova, + * dataoff_iovaX.D[1](15:0) = mbuf->dataoff + * len_olflagsX.D[0] = ol_flags, + * len_olflagsX.D[1](63:32) = mbuf->pkt_len + */ + dataoff_iova0 = vld1q_u64(mbuf0); + len_olflags0 = vld1q_u64(mbuf0 + 2); + dataoff_iova1 = vld1q_u64(mbuf1); + len_olflags1 = vld1q_u64(mbuf1 + 2); + dataoff_iova2 = vld1q_u64(mbuf2); + len_olflags2 = vld1q_u64(mbuf2 + 2); + dataoff_iova3 = vld1q_u64(mbuf3); + len_olflags3 = vld1q_u64(mbuf3 + 2); + + if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) { + struct rte_mbuf *mbuf; + /* Set don't free bit if reference count > 1 */ + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + + mbuf = (struct 
rte_mbuf *)((uintptr_t)mbuf0 - + offsetof(struct rte_mbuf, buf_iova)); + + if (rte_pktmbuf_prefree_seg(mbuf) == NULL) + vsetq_lane_u64(0x80000, xmask01, 0); + else + __mempool_check_cookies(mbuf->pool, + (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 - + offsetof(struct rte_mbuf, buf_iova)); + if (rte_pktmbuf_prefree_seg(mbuf) == NULL) + vsetq_lane_u64(0x80000, xmask01, 1); + else + __mempool_check_cookies(mbuf->pool, + (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 - + offsetof(struct rte_mbuf, buf_iova)); + if (rte_pktmbuf_prefree_seg(mbuf) == NULL) + vsetq_lane_u64(0x80000, xmask23, 0); + else + __mempool_check_cookies(mbuf->pool, + (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 - + offsetof(struct rte_mbuf, buf_iova)); + if (rte_pktmbuf_prefree_seg(mbuf) == NULL) + vsetq_lane_u64(0x80000, xmask23, 1); + else + __mempool_check_cookies(mbuf->pool, + (void **)&mbuf, + 1, 0); + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + } else { + struct rte_mbuf *mbuf; + /* Mark mempool object as "put" since + * it is freed by NIX + */ + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 - + offsetof(struct rte_mbuf, buf_iova)); + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 - + offsetof(struct rte_mbuf, buf_iova)); + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 - + offsetof(struct rte_mbuf, buf_iova)); + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + 1, 0); + + mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 - + offsetof(struct rte_mbuf, buf_iova)); + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, + 1, 0); + RTE_SET_USED(mbuf); + } + + /* Move mbufs to point pool */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mbuf, pool) - + offsetof(struct rte_mbuf, buf_iova)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mbuf, pool) - + offsetof(struct rte_mbuf, buf_iova)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mbuf, pool) - + offsetof(struct rte_mbuf, buf_iova)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mbuf, pool) - + offsetof(struct rte_mbuf, buf_iova)); + + if (flags & + (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | + NIX_TX_OFFLOAD_L3_L4_CSUM_F)) { + /* Get tx_offload for ol2, ol3, l2, l3 lengths */ + /* + * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7) + * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7) + */ + + asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" : + [a]"+w"(senddesc01_w1) : + [in]"r"(mbuf0 + 2) : "memory"); + + asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" : + [a]"+w"(senddesc01_w1) : + [in]"r"(mbuf1 + 2) : "memory"); + + asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" : + [b]"+w"(senddesc23_w1) : + [in]"r"(mbuf2 + 2) : "memory"); + + asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" : + [b]"+w"(senddesc23_w1) : + [in]"r"(mbuf3 + 2) : "memory"); + + /* Get pool pointer alone */ + mbuf0 = (uint64_t *)*mbuf0; + mbuf1 = (uint64_t *)*mbuf1; + mbuf2 = (uint64_t *)*mbuf2; + mbuf3 = (uint64_t *)*mbuf3; + } else { + /* Get pool pointer alone */ + mbuf0 = (uint64_t *)*mbuf0; + mbuf1 = (uint64_t *)*mbuf1; + mbuf2 = (uint64_t *)*mbuf2; + mbuf3 = (uint64_t *)*mbuf3; + } + + const uint8x16_t shuf_mask2 = { + 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + }; + xtmp128 = vzip2q_u64(len_olflags0, len_olflags1); + ytmp128 = 
vzip2q_u64(len_olflags2, len_olflags3); + + /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */ + const uint64x2_t and_mask0 = { + 0xFFFFFFFFFFFFFFFF, + 0x000000000000FFFF, + }; + + dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0); + dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0); + dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0); + dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0); + + /* + * Pick only 16 bits of pktlen preset at bits 63:32 + * and place them at bits 15:0. + */ + xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2); + ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2); + + /* Add pairwise to get dataoff + iova in sgdesc_w1 */ + sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1); + sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3); + + /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of + * pktlen at 15:0 position. + */ + sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128); + sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128); + senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128); + senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128); + + if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* + * Lookup table to translate ol_flags to + * il3/il4 types. But we still use ol3/ol4 types in + * senddesc_w1 as only one header processing is enabled. + */ + const uint8x16_t tbl = { + /* [0-15] = il4type:il3type */ + 0x04, /* none (IPv6 assumed) */ + 0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */ + 0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */ + 0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */ + 0x03, /* PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */ + 0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */ + 0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */ + 0x02, /* PKT_TX_IPV4 */ + 0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */ + 0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */ + 0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */ + 0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + }; + + /* Extract olflags to translate to iltypes */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(47):L3_LEN(9):L2_LEN(7+z) + * E(47):L3_LEN(9):L2_LEN(7+z) + */ + senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1); + senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1); + + /* Move OLFLAGS bits 55:52 to 51:48 + * with zeros preprended on the byte and rest + * don't care + */ + xtmp128 = vshrq_n_u8(xtmp128, 4); + ytmp128 = vshrq_n_u8(ytmp128, 4); + /* + * E(48):L3_LEN(8):L2_LEN(z+7) + * E(48):L3_LEN(8):L2_LEN(z+7) + */ + const int8x16_t tshft3 = { + -1, 0, 8, 8, 8, 8, 8, 8, + -1, 0, 8, 8, 8, 8, 8, 8, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Do the lookup */ + ltypes01 = vqtbl1q_u8(tbl, xtmp128); + ltypes23 = vqtbl1q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 48:55 of iltype + * and place it in ol3/ol4type of senddesc_w1 + */ + 
const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare ol4ptr, ol3ptr from ol3len, ol2len. + * a [E(32):E(16):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E(32):E(16):(OL3+OL2):OL2] + * => E(32):E(16)::OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u16(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u16(senddesc23_w1, 8)); + + /* Create first half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + + } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* + * Lookup table to translate ol_flags to + * ol3/ol4 types. 
+ */ + + const uint8x16_t tbl = { + /* [0-15] = ol4type:ol3type */ + 0x00, /* none */ + 0x03, /* OUTER_IP_CKSUM */ + 0x02, /* OUTER_IPV4 */ + 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */ + 0x04, /* OUTER_IPV6 */ + 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM */ + 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */ + 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */ + 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + }; + + /* Extract olflags to translate to iltypes */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(47):OL3_LEN(9):OL2_LEN(7+z) + * E(47):OL3_LEN(9):OL2_LEN(7+z) + */ + const uint8x16_t shuf_mask5 = { + 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + }; + senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5); + senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5); + + /* Extract outer ol flags only */ + const uint64x2_t o_cksum_mask = { + 0x1C00020000000000, + 0x1C00020000000000, + }; + + xtmp128 = vandq_u64(xtmp128, o_cksum_mask); + ytmp128 = vandq_u64(ytmp128, o_cksum_mask); + + /* Extract OUTER_UDP_CKSUM bit 41 and + * move it to bit 61 + */ + + xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20); + ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20); + + /* Shift oltype by 2 to start nibble from BIT(56) + * instead of BIT(58) + */ + xtmp128 = vshrq_n_u8(xtmp128, 2); + ytmp128 = vshrq_n_u8(ytmp128, 2); + /* + * E(48):L3_LEN(8):L2_LEN(z+7) + * E(48):L3_LEN(8):L2_LEN(z+7) + */ + const int8x16_t tshft3 = { + -1, 0, 8, 8, 8, 8, 8, 8, + -1, 0, 8, 8, 8, 8, 8, 8, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Do the lookup */ + ltypes01 = vqtbl1q_u8(tbl, xtmp128); + ltypes23 = vqtbl1q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 56:63 of oltype + * and place it in ol3/ol4type of senddesc_w1 + */ + const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare ol4ptr, ol3ptr from ol3len, ol2len. 
+ * a [E(32):E(16):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E(32):E(16):(OL3+OL2):OL2] + * => E(32):E(16)::OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u16(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u16(senddesc23_w1, 8)); + + /* Create second half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + + } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* Lookup table to translate ol_flags to + * ol4type, ol3type, il4type, il3type of senddesc_w1 + */ + const uint8x16x2_t tbl = { + { + { + /* [0-15] = il4type:il3type */ + 0x04, /* none (IPv6) */ + 0x14, /* PKT_TX_TCP_CKSUM (IPv6) */ + 0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */ + 0x34, /* PKT_TX_UDP_CKSUM (IPv6) */ + 0x03, /* PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + 0x02, /* PKT_TX_IPV4 */ + 0x12, /* PKT_TX_IPV4 | + * PKT_TX_TCP_CKSUM + */ + 0x22, /* PKT_TX_IPV4 | + * PKT_TX_SCTP_CKSUM + */ + 0x32, /* PKT_TX_IPV4 | + * PKT_TX_UDP_CKSUM + */ + 0x03, /* PKT_TX_IPV4 | + * PKT_TX_IP_CKSUM + */ + 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + }, + + { + /* [16-31] = ol4type:ol3type */ + 0x00, /* none */ + 0x03, /* OUTER_IP_CKSUM */ + 0x02, /* OUTER_IPV4 */ + 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */ + 0x04, /* OUTER_IPV6 */ + 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM */ + 0x33, /* OUTER_UDP_CKSUM | + * OUTER_IP_CKSUM + */ + 0x32, /* OUTER_UDP_CKSUM | + * OUTER_IPV4 + */ + 0x33, /* OUTER_UDP_CKSUM | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + 0x34, /* OUTER_UDP_CKSUM | + * OUTER_IPV6 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + }, + } + }; + + /* Extract olflags to translate to oltype & iltype */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = 
vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z) + * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z) + */ + const uint32x4_t tshft_4 = { + 1, 0, + 1, 0, + }; + senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4); + senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4); + + /* + * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z) + * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z) + */ + const uint8x16_t shuf_mask5 = { + 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF, + 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF, + }; + senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5); + senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5); + + /* Extract outer and inner header ol_flags */ + const uint64x2_t oi_cksum_mask = { + 0x1CF0020000000000, + 0x1CF0020000000000, + }; + + xtmp128 = vandq_u64(xtmp128, oi_cksum_mask); + ytmp128 = vandq_u64(ytmp128, oi_cksum_mask); + + /* Extract OUTER_UDP_CKSUM bit 41 and + * move it to bit 61 + */ + + xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20); + ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20); + + /* Shift right oltype by 2 and iltype by 4 + * to start oltype nibble from BIT(58) + * instead of BIT(56) and iltype nibble from BIT(48) + * instead of BIT(52). + */ + const int8x16_t tshft5 = { + 8, 8, 8, 8, 8, 8, -4, -2, + 8, 8, 8, 8, 8, 8, -4, -2, + }; + + xtmp128 = vshlq_u8(xtmp128, tshft5); + ytmp128 = vshlq_u8(ytmp128, tshft5); + /* + * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8) + * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8) + */ + const int8x16_t tshft3 = { + -1, 0, -1, 0, 0, 0, 0, 0, + -1, 0, -1, 0, 0, 0, 0, 0, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Mark Bit(4) of oltype */ + const uint64x2_t oi_cksum_mask2 = { + 0x1000000000000000, + 0x1000000000000000, + }; + + xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2); + ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2); + + /* Do the lookup */ + ltypes01 = vqtbl2q_u8(tbl, xtmp128); + ltypes23 = vqtbl2q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 48:55 of iltype and + * Bit 56:63 of oltype and place it in corresponding + * place in senddesc_w1. + */ + const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from + * l3len, l2len, ol3len, ol2len. 
+ * a [E(32):L3(8):L2(8):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2] + * a = a + (a << 16) + * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2] + * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u32(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u32(senddesc23_w1, 8)); + + /* Create second half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u32(senddesc01_w1, 16)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u32(senddesc23_w1, 16)); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + } else { + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + + /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + } + + do { + vst1q_u64(lmt_addr, cmd00); + vst1q_u64(lmt_addr + 2, cmd01); + vst1q_u64(lmt_addr + 4, cmd10); + vst1q_u64(lmt_addr + 6, cmd11); + 
vst1q_u64(lmt_addr + 8, cmd20); + vst1q_u64(lmt_addr + 10, cmd21); + vst1q_u64(lmt_addr + 12, cmd30); + vst1q_u64(lmt_addr + 14, cmd31); + lmt_status = otx2_lmt_submit(io_addr); + + } while (lmt_status == 0); + } + + return pkts; +} + +#else +static __rte_always_inline uint16_t +nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, const uint16_t flags) +{ + RTE_SET_USED(tx_queue); + RTE_SET_USED(tx_pkts); + RTE_SET_USED(pkts); + RTE_SET_USED(flags); + return 0; +} +#endif + #define T(name, f4, f3, f2, f1, f0, sz, flags) \ static uint16_t __rte_noinline __hot \ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \ @@ -107,6 +960,21 @@ otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \ NIX_TX_FASTPATH_MODES #undef T +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \ + struct rte_mbuf **tx_pkts, uint16_t pkts) \ +{ \ + /* VLAN and TSTMP are not supported by vec */ \ + if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \ + (flags) & NIX_TX_OFFLOAD_TSTAMP_F) \ + return 0; \ + return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, (flags)); \ +} + +NIX_TX_FASTPATH_MODES +#undef T + static inline void pick_tx_func(struct rte_eth_dev *eth_dev, const eth_tx_burst_t tx_burst[2][2][2][2][2]) @@ -143,7 +1011,20 @@ NIX_TX_FASTPATH_MODES #undef T }; - pick_tx_func(eth_dev, nix_eth_tx_burst); + const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2] = { +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ + [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name, + +NIX_TX_FASTPATH_MODES +#undef T + }; + + if (dev->scalar_ena || + (dev->tx_offload_flags & + (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F))) + pick_tx_func(eth_dev, nix_eth_tx_burst); + else + pick_tx_func(eth_dev, nix_eth_tx_vec_burst); if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
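[Editorial note, not part of the patch.] The hunk above chooses between the scalar and vector function tables. Condensed into a stand-alone helper (flag names invented, not the driver's API), the rule is:

#include <stdbool.h>
#include <stdint.h>

/* Invented stand-ins for NIX_TX_OFFLOAD_VLAN_QINQ_F / _TSTAMP_F. */
#define EX_VLAN_QINQ_F (1u << 0)
#define EX_TSTAMP_F    (1u << 1)

static bool
ex_use_vector_tx(bool scalar_ena, uint16_t tx_offload_flags)
{
	/* Forced scalar mode, or an offload the vector path cannot
	 * handle (VLAN/QinQ insertion, timestamping), falls back to
	 * the scalar burst functions. */
	if (scalar_ena)
		return false;

	return (tx_offload_flags & (EX_VLAN_QINQ_F | EX_TSTAMP_F)) == 0;
}

Multi-segment bursts are handled separately: when DEV_TX_OFFLOAD_MULTI_SEGS is enabled, the mseg table overrides whichever single-segment table was picked.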
From patchwork Sun Jun 30 18:06:06 2019 X-Patchwork-Id: 55724 Date: Sun, 30 Jun 2019 23:36:06 +0530 Message-ID: <20190630180609.36705-55-jerinj@marvell.com> Subject: [dpdk-dev] [PATCH v2 54/57] net/octeontx2: add device start operation From: Nithin Dabilpuram Add device start operation and update the correct function pointers for Rx and Tx burst functions. This patch also updates the octeontx2 NIC specific documentation. Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru Signed-off-by: Jerin Jacob --- doc/guides/nics/octeontx2.rst | 91 ++++++++++++ drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++ drivers/net/octeontx2/otx2_flow.c | 4 +- drivers/net/octeontx2/otx2_flow_parse.c | 4 +- drivers/net/octeontx2/otx2_ptp.c | 8 ++ drivers/net/octeontx2/otx2_vlan.c | 1 + 6 files changed, 286 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 90ca4e2d2..d4a458262 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -34,6 +34,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Vector Poll mode driver - Debug utilities - Context dump and error interrupt support - IEEE1588 timestamping +- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection Prerequisites ------------- @@ -49,6 +50,63 @@ The following options may be modified in the ``config`` file. Toggle compilation of the ``librte_pmd_octeontx2`` driver. +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC ` +for details. + +To compile the OCTEON TX2 PMD for Linux arm64 gcc, +use arm64-octeontx2-linux-gcc as target. + +#. Running testpmd: + + Follow instructions available in the document + :ref:`compiling and testing a PMD for a NIC ` + to run testpmd. + + Example output: + + .. code-block:: console + + ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1 + EAL: Detected 24 lcore(s) + EAL: Detected 1 NUMA nodes + EAL: Multi-process socket /var/run/dpdk/rte/mp_socket + EAL: No available hugepages reported in hugepages-2048kB + EAL: Probing VFIO support...
+ EAL: VFIO support initialized + EAL: PCI device 0002:02:00.0 on NUMA socket 0 + EAL: probe driver: 177d:a063 net_octeontx2 + EAL: using IOMMU type 1 (Type 1) + testpmd: create a new mbuf pool : n=267456, size=2176, socket=0 + testpmd: preferred mempool ops selected: octeontx2_npa + Configuring Port 0 (socket 0) + PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex + + Port 0: link state change event + Port 0: 36:10:66:88:7A:57 + Checking link statuses... + Done + No commandline core given, start packet forwarding + io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native + Logical Core 9 (socket 0) forwards packets on 1 streams: + RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00 + + io packet forwarding packets/burst=32 + nb forwarding cores=1 - nb forwarding ports=1 + port 0: RX queue number: 1 Tx queue number: 1 + Rx offloads=0x0 Tx offloads=0x10000 + RX queue: 0 + RX desc=512 - RX free threshold=0 + RX threshold registers: pthresh=0 hthresh=0 wthresh=0 + RX Offloads=0x0 + TX queue: 0 + TX desc=512 - TX free threshold=0 + TX threshold registers: pthresh=0 hthresh=0 wthresh=0 + TX offloads=0x10000 - TX RS bit threshold=0 + Press enter to exit + Runtime Config Options ---------------------- @@ -116,6 +174,39 @@ Runtime Config Options parameters to all the PCIe devices if application requires to configure on all the ethdev ports. +Limitations +----------- + +``mempool_octeontx2`` external mempool handler dependency +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager. +``net_octeontx2`` pmd only works with ``mempool_octeontx2`` mempool handler +as it is performance wise most effective way for packet allocation and Tx buffer +recycling on OCTEON TX2 SoC platform. + +CRC striping +~~~~~~~~~~~~ + +The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by +the host interface irrespective of the offload configuration. + + +Debugging Options +----------------- + +.. _table_octeontx2_ethdev_debug_options: + +.. 
table:: OCTEON TX2 ethdev debug options + + +---+------------+-------------------------------------------------------+ + | # | Component | EAL log command | + +===+============+=======================================================+ + | 1 | NIX | --log-level='pmd\.net.octeontx2,8' | + +---+------------+-------------------------------------------------------+ + | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' | + +---+------------+-------------------------------------------------------+ + RTE Flow Support ---------------- diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 44753cbf5..7f33f8808 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -135,6 +135,55 @@ otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +static int +npc_rx_enable(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_lf_start_rx(mbox); + + return otx2_mbox_process(mbox); +} + +static int +npc_rx_disable(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox); + + return otx2_mbox_process(mbox); +} + +static int +nix_cgx_start_link_event(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_start_linkevents(mbox); + + return otx2_mbox_process(mbox); +} + +static int +cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + if (en) + otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox); + else + otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -478,6 +527,74 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq) return NIX_MAXSQESZ_W8; } +static uint16_t +nix_rx_offload_flags(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_eth_conf *conf = &data->dev_conf; + struct rte_eth_rxmode *rxmode = &conf->rxmode; + uint16_t flags = 0; + + if (rxmode->mq_mode == ETH_MQ_RX_RSS) + flags |= NIX_RX_OFFLOAD_RSS_F; + + if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM)) + flags |= NIX_RX_OFFLOAD_CHECKSUM_F; + + if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) + flags |= NIX_RX_OFFLOAD_CHECKSUM_F; + + if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) + flags |= NIX_RX_MULTI_SEG_F; + + if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_QINQ_STRIP)) + flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + + if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)) + flags |= NIX_RX_OFFLOAD_TSTAMP_F; + + return flags; +} + +static uint16_t +nix_tx_offload_flags(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t conf = dev->tx_offloads; + uint16_t flags = 0; + + /* Fastpath is dependent on these enums */ + RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52)); + RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52)); + RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52)); + + if (conf & DEV_TX_OFFLOAD_VLAN_INSERT || + conf & DEV_TX_OFFLOAD_QINQ_INSERT) + flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F; + + if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM || + conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) + flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F; + + if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM || + conf & 
DEV_TX_OFFLOAD_TCP_CKSUM || + conf & DEV_TX_OFFLOAD_UDP_CKSUM || + conf & DEV_TX_OFFLOAD_SCTP_CKSUM) + flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F; + + if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) + flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F; + + if (conf & DEV_TX_OFFLOAD_MULTI_SEGS) + flags |= NIX_TX_MULTI_SEG_F; + + return flags; +} + static int nix_sq_init(struct otx2_eth_txq *txq) { @@ -1092,6 +1209,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) dev->rx_offloads = rxmode->offloads; dev->tx_offloads = txmode->offloads; + dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev); + dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev); dev->rss_info.rss_grps = NIX_RSS_GRPS; nb_rxq = RTE_MAX(data->nb_rx_queues, 1); @@ -1131,6 +1250,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Configure loop back mode */ + rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode); + if (rc) { + otx2_err("Failed to configure cgx loop back mode rc=%d", rc); + goto free_nix_lf; + } + rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true); if (rc) { otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc); @@ -1280,6 +1406,59 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) return rc; } +static int +otx2_nix_dev_start(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, i; + + /* Start rx queues */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rc = otx2_nix_rx_queue_start(eth_dev, i); + if (rc) + return rc; + } + + /* Start tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + rc = otx2_nix_tx_queue_start(eth_dev, i); + if (rc) + return rc; + } + + rc = otx2_nix_update_flow_ctrl_mode(eth_dev); + if (rc) { + otx2_err("Failed to update flow ctrl mode %d", rc); + return rc; + } + + rc = npc_rx_enable(dev); + if (rc) { + otx2_err("Failed to enable NPC rx %d", rc); + return rc; + } + + otx2_nix_toggle_flag_link_cfg(dev, true); + + rc = nix_cgx_start_link_event(dev); + if (rc) { + otx2_err("Failed to start cgx link event %d", rc); + goto rx_disable; + } + + otx2_nix_toggle_flag_link_cfg(dev, false); + otx2_eth_set_tx_function(eth_dev); + otx2_eth_set_rx_function(eth_dev); + + return 0; + +rx_disable: + npc_rx_disable(dev); + otx2_nix_toggle_flag_link_cfg(dev, false); + return rc; +} + + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, @@ -1289,6 +1468,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, + .dev_start = otx2_nix_dev_start, .tx_queue_start = otx2_nix_tx_queue_start, .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 3ddecfb23..982100df4 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -528,8 +528,10 @@ otx2_flow_destroy(struct rte_eth_dev *dev, return -EINVAL; /* Clear mark offload flag if there are no more mark actions */ - if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) + if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) { hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F; + otx2_eth_set_rx_function(dev); + } } rc = flow_free_rss_action(dev, flow); diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index 4cf5ce17e..375d00620 
100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -926,9 +926,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev, if (mark) flow->npc_action |= (uint64_t)mark << 40; - if (rte_atomic32_read(&npc->mark_actions) == 1) + if (rte_atomic32_read(&npc->mark_actions) == 1) { hw->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F; + otx2_eth_set_rx_function(dev); + } set_pf_func: /* Ideally AF must ensure that correct pf_func is set */ diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index 5291da241..0186c629a 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -118,6 +118,10 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; otx2_nix_form_default_desc(txq); } + + /* Setting up the function pointers as per new offload flags */ + otx2_eth_set_rx_function(eth_dev); + otx2_eth_set_tx_function(eth_dev); } return rc; } @@ -147,6 +151,10 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev) struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; otx2_nix_form_default_desc(txq); } + + /* Setting up the function pointers as per new offload flags */ + otx2_eth_set_rx_function(eth_dev); + otx2_eth_set_tx_function(eth_dev); } return rc; } diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index dc0f4e032..189c45174 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -760,6 +760,7 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) DEV_RX_OFFLOAD_QINQ_STRIP)) { dev->rx_offloads |= offloads; dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + otx2_eth_set_rx_function(eth_dev); } done: From patchwork Sun Jun 30 18:06:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55725 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 786371BC27; Sun, 30 Jun 2019 20:12:29 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id AF1231B998 for ; Sun, 30 Jun 2019 20:09:22 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI73OQ028454 for ; Sun, 30 Jun 2019 11:09:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=nhm4EIOq7uWb2P5tTX6N3G5IY6fTTQXMr3GQFDZfo2k=; b=U1XHS3WxowJCdGTpqfwlq9VUq6B4ySQXHQORUh6xi1e0E26DP4bwOHOjI/dBdH1bPQO0 uHVg1eFiZ7QilrZqU052O7w0XKEjgpjEeOy5iLIWPV4+ihEeBX3qOOs7hwDcyD+07/mW zNzSq6sn4MjtJLyE7DFLU3vXMOvVN8srBJDVAk0/Z9cGDQC502mfc0q9Z34pGsILhknQ 5M81AdI5qUj3iGWix1PRXHhM94CIFa1aHyZ4e806tAk6LNslRQB2R55rn9SPSEYwBTG5 ji/W4RIcz/X9LrnXypNaqpttjTVoyCRansjqJwkeGrW7rijx+7xorEShu6rUMKfczqFI tg== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2te5bn4gq7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Sun, 30 Jun 2019 11:09:21 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 
15.0.1367.3; Sun, 30 Jun 2019 11:09:20 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:09:20 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id F01263F703F; Sun, 30 Jun 2019 11:09:18 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Vamsi Attunuru Date: Sun, 30 Jun 2019 23:36:07 +0530 Message-ID: <20190630180609.36705-56-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 55/57] net/octeontx2: add device stop and close operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add device stop, close and reset operations. Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/otx2_ethdev.c | 75 +++++++++++++++++++++++++++++ 1 file changed, 75 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 7f33f8808..e23bed603 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -184,6 +184,19 @@ cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en) return otx2_mbox_process(mbox); } +static int +nix_cgx_stop_link_event(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -1189,6 +1202,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) if (dev->configured == 1) { otx2_nix_rxchan_bpid_cfg(eth_dev, false); otx2_nix_vlan_fini(eth_dev); + otx2_flow_free_all_resources(dev); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); @@ -1406,6 +1420,37 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) return rc; } +static void +otx2_nix_dev_stop(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_mbuf *rx_pkts[32]; + struct otx2_eth_rxq *rxq; + int count, i, j, rc; + + nix_cgx_stop_link_event(dev); + npc_rx_disable(dev); + + /* Stop rx queues and free up pkts pending */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rc = otx2_nix_rx_queue_stop(eth_dev, i); + if (rc) + continue; + + rxq = eth_dev->data->rx_queues[i]; + count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32); + while (count) { + for (j = 0; j < count; j++) + rte_pktmbuf_free(rx_pkts[j]); + count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32); + } + } + + /* Stop tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_stop(eth_dev, i); +} + static int otx2_nix_dev_start(struct rte_eth_dev *eth_dev) { @@ -1458,6 +1503,8 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev) return rc; } +static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev); +static void otx2_nix_dev_close(struct rte_eth_dev 
*eth_dev); /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { @@ -1469,11 +1516,14 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, .dev_start = otx2_nix_dev_start, + .dev_stop = otx2_nix_dev_stop, + .dev_close = otx2_nix_dev_close, .tx_queue_start = otx2_nix_tx_queue_start, .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, .rx_queue_stop = otx2_nix_rx_queue_stop, .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get, + .dev_reset = otx2_nix_dev_reset, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, @@ -1725,9 +1775,14 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Clear the flag since we are closing down */ + dev->configured = 0; + /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + npc_rx_disable(dev); + /* Disable vlan offloads */ otx2_nix_vlan_fini(eth_dev); @@ -1738,6 +1793,8 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (otx2_ethdev_is_ptp_en(dev)) otx2_nix_timesync_disable(eth_dev); + nix_cgx_stop_link_event(dev); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); @@ -1793,6 +1850,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) return 0; } +static void +otx2_nix_dev_close(struct rte_eth_dev *eth_dev) +{ + otx2_eth_dev_uninit(eth_dev, true); +} + +static int +otx2_nix_dev_reset(struct rte_eth_dev *eth_dev) +{ + int rc; + + rc = otx2_eth_dev_uninit(eth_dev, false); + if (rc) + return rc; + + return otx2_eth_dev_init(eth_dev); +} + static int nix_remove(struct rte_pci_device *pci_dev) { From patchwork Sun Jun 30 18:06:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55726 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AB29D1BC70; Sun, 30 Jun 2019 20:12:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id D0AB01B9B5 for ; Sun, 30 Jun 2019 20:09:26 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI5KjY015698; Sun, 30 Jun 2019 11:09:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=M7RClfI9fZOXxlxUi04JGK9nTh/juvfwOaGs216yBfQ=; b=yudZupF6DlhJc+DhyYDWkdlV40NgsegQoKNcgZRdAW89tx4H/rPbKvJMHOPcZECa4ySl +PkMdx1wAg5oDkAgXy5/TDLCeD++i5tuepzjeWlkND4gPKcXRvyINHePwjn5bVhhFyQf qTIiSEF4COXykh22A7HjC4+zj79NI1dNjgiVCuTQv8q/4DN6Iqho3QX7aD2Lw6XqmlNu drcglRNVGqe69E8ABNyiQQw+d4vlPL4nuEKluzcTd0aMsk+ok3QLMgOz36p7VlbMe7t2 O+2+BmqgmlG9bI3mNTwDSTvdnxXRDpcTinukHxFFZDjyJfjiBwyEngq4vd7oOaKUUWdX 4g== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yg5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:09:26 
-0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:09:24 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:09:24 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id F3B503F703F; Sun, 30 Jun 2019 11:09:21 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Vamsi Attunuru , Sunil Kumar Kori Date: Sun, 30 Jun 2019 23:36:08 +0530 Message-ID: <20190630180609.36705-57-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 56/57] net/octeontx2: add MTU set operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add MTU set operation and MTU update feature. Signed-off-by: Vamsi Attunuru Signed-off-by: Sunil Kumar Kori --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 7 ++ drivers/net/octeontx2/otx2_ethdev.h | 4 + drivers/net/octeontx2/otx2_ethdev_ops.c | 86 ++++++++++++++++++++++ 6 files changed, 100 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 1856d9924..be10dc0c8 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -15,6 +15,7 @@ Runtime Tx queue setup = Y Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y +MTU update = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 053fca288..df8180f83 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -15,6 +15,7 @@ Runtime Tx queue setup = Y Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y +MTU update = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index d4a458262..517e9e641 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -30,6 +30,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Port hardware statistics - Link state information - Link flow control +- MTU update - Scatter-Gather IO support - Vector Poll mode driver - Debug utilities - Context dump and error interrupt support diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index e23bed603..170593e95 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1457,6 +1457,12 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev) struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int rc, i; + if (eth_dev->data->nb_rx_queues != 0) { + rc = otx2_nix_recalc_mtu(eth_dev); + if (rc) + return 
rc; + } + /* Start rx queues */ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { rc = otx2_nix_rx_queue_start(eth_dev, i); @@ -1527,6 +1533,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .mtu_set = otx2_nix_mtu_set, .mac_addr_add = otx2_nix_mac_addr_add, .mac_addr_remove = otx2_nix_mac_addr_del, .mac_addr_set = otx2_nix_mac_addr_set, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index f39fdfa1f..3703acc69 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -371,6 +371,10 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx); int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx); uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); +/* MTU */ +int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu); +int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev); + /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 6a3048336..5a16a3c04 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -6,6 +6,92 @@ #include "otx2_ethdev.h" +int +otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) +{ + uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_mbox *mbox = dev->mbox; + struct nix_frs_cfg *req; + int rc; + + /* Check if MTU is within the allowed range */ + if (frame_size < NIX_MIN_FRS || frame_size > NIX_MAX_FRS) + return -EINVAL; + + buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; + + /* Refuse MTU that requires the support of scattered packets + * when this feature has not been enabled before. 
+ */ + if (data->dev_started && frame_size > buffsz && + !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) + return -EINVAL; + + /* Check <seg size> * <max_segs> >= max_frame */ + if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) && + (frame_size > buffsz * NIX_RX_NB_SEG_MAX)) + return -EINVAL; + + req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox); + req->update_smq = true; + /* FRS HW config should exclude FCS but include NPC VTAG insert size */ + req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + /* Now just update Rx MAXLEN */ + req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox); + req->maxlen = frame_size - RTE_ETHER_CRC_LEN; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + if (frame_size > RTE_ETHER_MAX_LEN) + dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; + else + dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; + + /* Update max_rx_pkt_len */ + data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + + return rc; +} + +int +otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_pktmbuf_pool_private *mbp_priv; + struct otx2_eth_rxq *rxq; + uint32_t buffsz; + uint16_t mtu; + int rc; + + /* Get rx buffer size */ + rxq = data->rx_queues[0]; + mbp_priv = rte_mempool_get_priv(rxq->pool); + buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM; + + /* Setup scatter mode if needed by jumbo */ + if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) + dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER; + + /* Setup MTU based on max_rx_pkt_len */ + mtu = data->dev_conf.rxmode.max_rx_pkt_len - NIX_L2_OVERHEAD; + + rc = otx2_nix_mtu_set(eth_dev, mtu); + if (rc) + otx2_err("Failed to set default MTU size %d", rc); + + return rc; +} + static void nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en) { From patchwork Sun Jun 30 18:06:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 55727 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AC8771BC76; Sun, 30 Jun 2019 20:12:34 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 560A71B9B5 for ; Sun, 30 Jun 2019 20:09:30 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5UI54DB015680; Sun, 30 Jun 2019 11:09:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=lVAljzD9O8pWtaEa4PCLPyX9g7IDnNFAI8JghDAo6+A=; b=RGxnpRY8hgVtTEnGqMBbK0lX3CGy1FV9Re1pyG2x0cWnt2BSUqqvD+zOia2ZVUAXmhvA XGak4shxID2KuaIfSTcBUmgoU2lcZjkvnlR5oBY3nKhbKCUxmqdMzMIH3RQkJYaEyyC1 CWgXGBfPxe5IA7pQM/1abK9mXHxQ0A2LydHfrocfreJnLuRMCaeQqBEh8PH4vlMvi29X iLZvWI4wke8f3MaoPgYO+CEuT3co5pWrU8VXQlvfib0u4DEDuIG6H1FljItcLrT72FFP NsF/MhTxpE5SHfCy0/gqT1+S4ghsQ9GjRKWOwtKVbqtiDS+XdeODtBj0M1kPW0kKgrbI Kw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2te7gm3yg7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 30 Jun 2019 11:09:29 -0700 Received:
from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 30 Jun 2019 11:09:27 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 30 Jun 2019 11:09:27 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C6CCE3F703F; Sun, 30 Jun 2019 11:09:25 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Harman Kalra Date: Sun, 30 Jun 2019 23:36:09 +0530 Message-ID: <20190630180609.36705-58-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190630180609.36705-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> <20190630180609.36705-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-30_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v2 57/57] net/octeontx2: add Rx interrupts support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Harman Kalra This patch implements the Rx interrupt feature required for power saving. These interrupts can be enabled/disabled on demand. Signed-off-by: Harman Kalra --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/octeontx2.rst | 1 + drivers/net/octeontx2/otx2_ethdev.c | 31 ++++++ drivers/net/octeontx2/otx2_ethdev.h | 16 +++ drivers/net/octeontx2/otx2_ethdev_irq.c | 125 ++++++++++++++++++++++ 6 files changed, 175 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index be10dc0c8..66952328b 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -5,6 +5,7 @@ ; [Features] Speed capabilities = Y +Rx interrupt = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index bef451d01..16799309b 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -7,6 +7,7 @@ Speed capabilities = Y Lock-free Tx queue = Y Multiprocess aware = Y +Rx interrupt = Y Link status = Y Link status event = Y Runtime Rx queue setup = Y diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst index 517e9e641..dbd376665 100644 --- a/doc/guides/nics/octeontx2.rst +++ b/doc/guides/nics/octeontx2.rst @@ -36,6 +36,7 @@ Features of the OCTEON TX2 Ethdev PMD are: - Debug utilities - Context dump and error interrupt support - IEEE1588 timestamping - HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection +- Support Rx interrupt Prerequisites ------------- diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 170593e95..7f50a4c0e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -277,6 +277,8 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, /* Many to one reduction */ + aq->cq.qint_idx = qid % dev->qints; + /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */ + aq->cq.cint_idx = qid; if (otx2_ethdev_fixup_is_limit_cq_full(dev)) { uint16_t min_rx_drop;
@@ -1204,6 +1206,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) otx2_nix_vlan_fini(eth_dev); otx2_flow_free_all_resources(dev); oxt2_nix_unregister_queue_irqs(eth_dev); + if (eth_dev->data->dev_conf.intr_conf.rxq) + oxt2_nix_unregister_cq_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); if (rc) @@ -1264,6 +1268,27 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Register cq IRQs */ + if (eth_dev->data->dev_conf.intr_conf.rxq) { + if (eth_dev->data->nb_rx_queues > dev->cints) { + otx2_err("Rx interrupt cannot be enabled, rxq > %d", + dev->cints); + goto free_nix_lf; + } + /* Rx interrupt feature cannot work with vector mode because, + * vector mode doesn't process packets unless min 4 pkts are + * received, while cq interrupts are generated even for 1 pkt + * in the CQ. + */ + dev->scalar_ena = true; + + rc = oxt2_nix_register_cq_irqs(eth_dev); + if (rc) { + otx2_err("Failed to register CQ interrupts rc=%d", rc); + goto free_nix_lf; + } + } + /* Configure loop back mode */ rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode); if (rc) { @@ -1576,6 +1601,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set, .vlan_tpid_set = otx2_nix_vlan_tpid_set, .vlan_pvid_set = otx2_nix_vlan_pvid_set, + .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable, + .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable, }; static inline int @@ -1824,6 +1851,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Unregister queue irqs */ oxt2_nix_unregister_queue_irqs(eth_dev); + /* Unregister cq irqs */ + if (eth_dev->data->dev_conf.intr_conf.rxq) + oxt2_nix_unregister_cq_irqs(eth_dev); + rc = nix_lf_free(dev); if (rc) otx2_err("Failed to free nix lf, rc=%d", rc); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 3703acc69..f6905db83 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -102,6 +102,13 @@ #define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR) #define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR) +#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when + * NIX_LF_CINTX_CNT[QCOUNT] + * crosses this value + */ +#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */ +#define CQ_TIMER_THRESH_MAX 255 + #define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\ ETH_RSS_TCP | ETH_RSS_SCTP | \ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD) @@ -248,6 +255,7 @@ struct otx2_eth_dev { uint16_t qints; uint8_t configured; uint8_t configured_qints; + uint8_t configured_cints; uint8_t configured_nb_rx_qs; uint8_t configured_nb_tx_qs; uint16_t nix_msixoff; @@ -262,6 +270,7 @@ struct otx2_eth_dev { uint64_t rx_offload_capa; uint64_t tx_offload_capa; struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; + struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT]; uint16_t txschq[NIX_TXSCH_LVL_CNT]; uint16_t txschq_contig[NIX_TXSCH_LVL_CNT]; uint16_t txschq_index[NIX_TXSCH_LVL_CNT]; @@ -384,8 +393,15 @@ void otx2_eth_dev_link_status_update(struct otx2_dev *dev, /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); +int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); +void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev); + +int 
otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev, + uint16_t rx_queue_id); +int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev, + uint16_t rx_queue_id); /* Debug */ int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data); diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 066aca7a5..9006e5c8b 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -5,6 +5,7 @@ #include <inttypes.h> #include <rte_bus_pci.h> +#include <rte_malloc.h> #include "otx2_ethdev.h" @@ -171,6 +172,18 @@ nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off) (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff)); } +static void +nix_lf_cq_irq(void *param) +{ + struct otx2_qint *cint = (struct otx2_qint *)param; + struct rte_eth_dev *eth_dev = cint->eth_dev; + struct otx2_eth_dev *dev; + + dev = otx2_eth_pmd_priv(eth_dev); + /* Clear interrupt */ + otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx)); +} + static void nix_lf_q_irq(void *param) { @@ -315,6 +328,92 @@ oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev) } } +int +oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint8_t rc = 0, vec, q; + + dev->configured_cints = RTE_MIN(dev->cints, + eth_dev->data->nb_rx_queues); + + for (q = 0; q < dev->configured_cints; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q; + + /* Clear CINT CNT */ + otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q)); + + /* Clear interrupt */ + otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q)); + + dev->cints_mem[q].eth_dev = eth_dev; + dev->cints_mem[q].qintx = q; + + /* Sync cints_mem update */ + rte_smp_wmb(); + + /* Register queue irq vector */ + rc = otx2_register_irq(handle, nix_lf_cq_irq, + &dev->cints_mem[q], vec); + if (rc) { + otx2_err("Failed to register CQ irq, rc=%d", rc); + return rc; + } + + if (!handle->intr_vec) { + handle->intr_vec = rte_zmalloc("intr_vec", + dev->configured_cints * + sizeof(int), 0); + if (!handle->intr_vec) { + otx2_err("Failed to allocate %d rx intr_vec", + dev->configured_cints); + return -ENOMEM; + } + } + /* VFIO vector zero is reserved for misc interrupt so + * doing required adjustment. (b13bfab4cd) + */ + handle->intr_vec[q] = RTE_INTR_VEC_RXTX_OFFSET + vec; + + /* Configure CQE interrupt coalescing parameters */ + otx2_write64(((CQ_CQE_THRESH_DEFAULT) | + (CQ_CQE_THRESH_DEFAULT << 32) | + (CQ_TIMER_THRESH_DEFAULT << 48)), + dev->base + NIX_LF_CINTX_WAIT((q))); + + /* Keeping the CQ interrupt disabled as the rx interrupt + * feature needs to be enabled/disabled on demand.
+ */ + } + + return rc; +} + +void +oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec, q; + + for (q = 0; q < dev->configured_cints; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q; + + /* Clear CINT CNT */ + otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q)); + + /* Clear interrupt */ + otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q)); + + /* Unregister queue irq vector */ + otx2_unregister_irq(handle, nix_lf_cq_irq, + &dev->cints_mem[q], vec); + } +} + int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev) { @@ -341,3 +440,29 @@ otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev) nix_lf_unregister_err_irq(eth_dev); nix_lf_unregister_ras_irq(eth_dev); } + +int +otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev, + uint16_t rx_queue_id) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* Enable CINT interrupt */ + otx2_write64(BIT_ULL(0), dev->base + + NIX_LF_CINTX_ENA_W1S(rx_queue_id)); + + return 0; +} + +int +otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev, + uint16_t rx_queue_id) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* Clear and disable CINT interrupt */ + otx2_write64(BIT_ULL(0), dev->base + + NIX_LF_CINTX_ENA_W1C(rx_queue_id)); + + return 0; +}
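For reference, the NIX and NPC log levels from the debug-options table in the octeontx2.rst hunk earlier in the series can also be set programmatically at EAL init time. A minimal sketch, not part of the series, assuming only the documented component names pmd.net.octeontx2 and pmd.net.octeontx2.flow (the backslashes in the guide are regex escaping for the command-line option); the app name is a placeholder:

#include <rte_common.h>
#include <rte_eal.h>

/* Illustrative sketch: enable verbose NIX and NPC logs via EAL args */
int
main(void)
{
	char arg0[] = "octeontx2-dbg";                        /* hypothetical app name */
	char arg1[] = "--log-level=pmd.net.octeontx2,8";      /* NIX, level 8 */
	char arg2[] = "--log-level=pmd.net.octeontx2.flow,8"; /* NPC, level 8 */
	char *argv[] = { arg0, arg1, arg2 };

	return rte_eal_init(RTE_DIM(argv), argv);
}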
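The nix_rx_offload_flags()/nix_tx_offload_flags() helpers added at the top of this series translate the ethdev offload bits latched by otx2_nix_configure() into the NIX_RX_*/NIX_TX_* fastpath flag masks. Below is a hedged sketch of a port configuration that exercises that mapping; the port id and single queue pair are placeholders:

#include <rte_ethdev.h>

/* Illustrative sketch: with this conf the PMD computes
 * NIX_RX_OFFLOAD_RSS_F | NIX_RX_OFFLOAD_CHECKSUM_F on the Rx side and
 * NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_MBUF_NOFF_F on the Tx
 * side (MBUF_NOFF_F because fast free is not requested).
 */
static int
configure_port(uint16_t port_id)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = ETH_MQ_RX_RSS,
			.offloads = DEV_RX_OFFLOAD_IPV4_CKSUM |
				    DEV_RX_OFFLOAD_TCP_CKSUM |
				    DEV_RX_OFFLOAD_UDP_CKSUM,
		},
		.rx_adv_conf = {
			.rss_conf = { .rss_hf = ETH_RSS_IP },
		},
		.txmode = {
			.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
				    DEV_TX_OFFLOAD_TCP_CKSUM |
				    DEV_TX_OFFLOAD_UDP_CKSUM,
		},
	};

	/* 1 Rx + 1 Tx queue; otx2_nix_configure() latches the flag masks */
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}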
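The otx2_flow.c and otx2_flow_parse.c hunks re-select the Rx function as MARK actions come and go, so the mark-update fastpath is only paid for while at least one marking rule exists. A minimal sketch of a rule whose creation takes npc->mark_actions from 0 to 1; the match-all Ethernet pattern and mark id 42 are arbitrary choices for illustration:

#include <rte_flow.h>

/* Illustrative sketch: creating this rule sets
 * NIX_RX_OFFLOAD_MARK_UPDATE_F in otx2_flow_parse_actions();
 * destroying it clears the flag again in otx2_flow_destroy().
 */
static struct rte_flow *
install_mark_rule(uint16_t port_id, struct rte_flow_error *err)
{
	const struct rte_flow_attr attr = { .ingress = 1 };
	const struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH }, /* match any L2 frame */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	const struct rte_flow_action_mark mark = { .id = 42 };
	const struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}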
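Patch 55/57 wires dev_stop, dev_close and dev_reset into the ops table. From the application side the lifecycle looks as below, a sketch assuming the 19.08-era ethdev API in which rte_eth_dev_stop() and rte_eth_dev_close() return void; the port id is hypothetical:

#include <rte_ethdev.h>

/* Illustrative sketch of the lifecycle exercising the new callbacks */
static int
restart_port(uint16_t port_id)
{
	/* otx2_nix_dev_stop(): stops CGX link events and NPC Rx, stops the
	 * queues and drains pending CQ entries via rx_pkt_burst_no_offload().
	 */
	rte_eth_dev_stop(port_id);

	/* otx2_nix_dev_reset(): otx2_eth_dev_uninit() with the mbox kept
	 * open, followed by a fresh otx2_eth_dev_init().
	 */
	if (rte_eth_dev_reset(port_id) != 0)
		return -1;

	/* The port must be reconfigured and its queues set up again
	 * before the next rte_eth_dev_start().
	 */
	return 0;
}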
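The FRS arithmetic in otx2_nix_mtu_set() above is easy to misread, so here it is worked through for a standard 1500-byte MTU. A sketch only: the inlined constants are assumptions for illustration (NIX_L2_OVERHEAD taken as Ethernet header plus FCS, NIX_MAX_VTAG_ACT_SIZE as two 4-byte VTAG actions; the series defines the real values elsewhere):

#include <stdint.h>
#include <stdio.h>

#define ETHER_HDR_LEN		14
#define ETHER_CRC_LEN		4
#define NIX_L2_OVERHEAD		(ETHER_HDR_LEN + ETHER_CRC_LEN)	/* assumed */
#define NIX_MAX_VTAG_ACT_SIZE	8				/* assumed */

int
main(void)
{
	uint16_t mtu = 1500;
	uint32_t frame_size = mtu + NIX_L2_OVERHEAD;		/* 1518 */

	/* SMQ config: exclude FCS, keep room for NPC VTAG insertion */
	uint32_t smq_maxlen = frame_size - ETHER_CRC_LEN +
			      NIX_MAX_VTAG_ACT_SIZE;		/* 1522 */
	/* Rx MAXLEN: plain frame size without FCS */
	uint32_t rx_maxlen = frame_size - ETHER_CRC_LEN;	/* 1514 */

	printf("frame=%u smq=%u rx=%u\n", frame_size, smq_maxlen, rx_maxlen);
	return 0;
}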
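Finally, for the Rx interrupt support in patch 57/57: the coalescing write packs two CQE thresholds and a timer into NIX_LF_CINTX_WAIT, so with the defaults above the register value is 0x1 | (0x1 << 32) | (0xA << 48) = 0x000A000100000001, i.e. fire after one pending CQE or roughly 1 usec. On the application side the feature follows the usual DPDK rxq-interrupt pattern, sketched below with hypothetical port/queue ids; the port must have been configured with dev_conf.intr_conf.rxq = 1 so that oxt2_nix_register_cq_irqs() ran (which also forces scalar Rx, since the vector path batches four packets):

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Illustrative sketch: sleep until the queue's CINT fires */
static int
wait_for_rx(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event ev;
	int n;

	/* Hook the queue's MSI-X vector into this thread's epoll fd */
	if (rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				      RTE_INTR_EVENT_ADD, NULL) != 0)
		return -1;

	/* Arm the CINT: otx2_nix_rx_queue_intr_enable() */
	if (rte_eth_dev_rx_intr_enable(port_id, queue_id) != 0)
		return -1;

	/* Block until NIX_LF_CINTX_CNT crosses the CQE/timer threshold */
	n = rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1);

	/* Disarm before going back to polling:
	 * otx2_nix_rx_queue_intr_disable()
	 */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);

	return n;
}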