From patchwork Fri Jun 28 18:23:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55606 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3701E4CC0; Fri, 28 Jun 2019 20:24:03 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 23F6C4C8B for ; Fri, 28 Jun 2019 20:24:01 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIKhtg010886; Fri, 28 Jun 2019 11:24:01 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=SLTxBUBKs1lklErGj8+5jm4GybOgX3rp1CSD6lfaRMg=; b=kc3k/Dwmy84UuFovENnVTPhSr6PpzytG1tEjUF5TZjGVRdJOldmIjEQiHFr2BmDh1UHP 0PKPF03Rosm8tzLyaxV3r5jz6SpzSf8KK8RWLSj3de0UUJ67c39UqsHfxKnFYOTlKxDP T+PtKpdsmRCRA+eUL5QyI6FJ3K/kCzMQZPvm8XgITKje4mjNYyBAnXoMVRddLaOpcLeM mRCuGvcRsVFDEt8MHjAAsEqsA7eiDFzpp8GXEwRrKLKPq60UMN5K3HSJEgzpH5I1GqXq qHgcsTXIMwGG4uVsbcRmauofXLyj0AsEjM/z9CdXZoYBeSdRn5+lvzN4Pjd2bNgOT2R9 8g== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0a-0016f401.pphosted.com with ESMTP id 2tdd77agh0-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Fri, 28 Jun 2019 11:24:01 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Fri, 28 Jun 2019 11:23:59 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Fri, 28 Jun 2019 
11:23:59 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id 2E3583F7040; Fri, 28 Jun 2019 11:23:56 -0700 (PDT) From: To: , Thomas Monjalon , John McNamara , Marko Kovacevic , "Pavan Nikhilesh" , Nithin Dabilpuram , Vamsi Attunuru , "Anatoly Burakov" CC: Date: Fri, 28 Jun 2019 23:53:12 +0530 Message-ID: <20190628182354.228-2-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 01/42] event/octeontx2: add build infra and device probe X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add the make and meson based build infrastructure along with the eventdev(SSO) device probe. 
Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh Signed-off-by: Nithin Dabilpuram --- MAINTAINERS | 6 ++ config/common_base | 5 ++ doc/guides/eventdevs/index.rst | 1 + doc/guides/eventdevs/octeontx2.rst | 60 ++++++++++++++++ doc/guides/platform/octeontx2.rst | 3 + drivers/event/Makefile | 1 + drivers/event/meson.build | 2 +- drivers/event/octeontx2/Makefile | 39 +++++++++++ drivers/event/octeontx2/meson.build | 21 ++++++ drivers/event/octeontx2/otx2_evdev.c | 70 +++++++++++++++++++ drivers/event/octeontx2/otx2_evdev.h | 26 +++++++ .../rte_pmd_octeontx2_event_version.map | 4 ++ mk/rte.app.mk | 2 + 13 files changed, 239 insertions(+), 1 deletion(-) create mode 100644 doc/guides/eventdevs/octeontx2.rst create mode 100644 drivers/event/octeontx2/Makefile create mode 100644 drivers/event/octeontx2/meson.build create mode 100644 drivers/event/octeontx2/otx2_evdev.c create mode 100644 drivers/event/octeontx2/otx2_evdev.h create mode 100644 drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map diff --git a/MAINTAINERS b/MAINTAINERS index bbec1982c..39f12a1f2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1034,6 +1034,12 @@ Cavium OCTEON TX timvf M: Pavan Nikhilesh F: drivers/event/octeontx/timvf_* +Marvell OCTEON TX2 +M: Pavan Nikhilesh +M: Jerin Jacob +F: drivers/event/octeontx2/ +F: doc/guides/eventdevs/octeontx2.rst + NXP DPAA eventdev M: Hemant Agrawal M: Sunil Kumar Kori diff --git a/config/common_base b/config/common_base index fa1ae249a..b3bcb4c4f 100644 --- a/config/common_base +++ b/config/common_base @@ -709,6 +709,11 @@ CONFIG_RTE_LIBRTE_PMD_DSW_EVENTDEV=y # CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF=y +# +# Compile PMD for octeontx2 sso event device +# +CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV=y + # # Compile PMD for OPDL event device # diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst index f7382dc8a..570905b81 100644 --- a/doc/guides/eventdevs/index.rst +++ b/doc/guides/eventdevs/index.rst @@ -16,4 +16,5 @@ application 
trough the eventdev API. dsw sw octeontx + octeontx2 opdl diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst new file mode 100644 index 000000000..341c5b21d --- /dev/null +++ b/doc/guides/eventdevs/octeontx2.rst @@ -0,0 +1,60 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2019 Marvell International Ltd. + +OCTEON TX2 SSO Eventdev Driver +=============================== + +The OCTEON TX2 SSO PMD (**librte_pmd_octeontx2_event**) provides poll mode +eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2** +SoC family. + +More information about OCTEON TX2 SoC can be found at `Marvell Official Website +`_. + +Features +-------- + +Features of the OCTEON TX2 SSO PMD are: + +- 256 Event queues +- 26 (dual) and 52 (single) Event ports +- HW event scheduler +- Supports 1M flows per event queue +- Flow based event pipelining +- Flow pinning support in flow based event pipelining +- Queue based event pipelining +- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow +- Event scheduling QoS based on event queue priority +- Open system with configurable amount of outstanding events limited only by + DRAM +- HW accelerated dequeue timeout support to enable power management + +Prerequisites and Compilation procedure +--------------------------------------- + + See :doc:`../platform/octeontx2` for setup information. + +Pre-Installation Configuration +------------------------------ + +Compile time Config Options +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The following option can be modified in the ``config`` file. + +- ``CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV`` (default ``y``) + + Toggle compilation of the ``librte_pmd_octeontx2_event`` driver. + +Debugging Options +~~~~~~~~~~~~~~~~~ + +.. _table_octeontx2_event_debug_options: + +.. 
table:: OCTEON TX2 event device debug options + + +---+------------+-------------------------------------------------------+ + | # | Component | EAL log command | + +===+============+=======================================================+ + | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' | + +---+------------+-------------------------------------------------------+ diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst index c9ea45647..fbf1193e7 100644 --- a/doc/guides/platform/octeontx2.rst +++ b/doc/guides/platform/octeontx2.rst @@ -101,6 +101,9 @@ This section lists dataplane H/W block(s) available in OCTEON TX2 SoC. #. **Mempool Driver** See :doc:`../mempool/octeontx2` for NPA mempool driver information. +#. **Event Device Driver** + See :doc:`../eventdevs/octeontx2` for SSO event device driver information. + Procedure to Setup Platform --------------------------- diff --git a/drivers/event/Makefile b/drivers/event/Makefile index 03ad1b6cb..86be41b9e 100644 --- a/drivers/event/Makefile +++ b/drivers/event/Makefile @@ -8,6 +8,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton DIRS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += sw DIRS-$(CONFIG_RTE_LIBRTE_PMD_DSW_EVENTDEV) += dsw DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += octeontx +DIRS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += octeontx2 ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y) DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_EVENTDEV) += dpaa endif diff --git a/drivers/event/meson.build b/drivers/event/meson.build index fb723f727..50d30c53f 100644 --- a/drivers/event/meson.build +++ b/drivers/event/meson.build @@ -1,7 +1,7 @@ # SPDX-License-Identifier: BSD-3-Clause # Copyright(c) 2017 Intel Corporation -drivers = ['dpaa', 'dpaa2', 'opdl', 'skeleton', 'sw', 'dsw'] +drivers = ['dpaa', 'dpaa2', 'octeontx2', 'opdl', 'skeleton', 'sw', 'dsw'] if not (toolchain == 'gcc' and cc.version().version_compare('<4.8.6') and dpdk_conf.has('RTE_ARCH_ARM64')) drivers += 'octeontx' diff --git 
a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile new file mode 100644 index 000000000..dbf6ec22d --- /dev/null +++ b/drivers/event/octeontx2/Makefile @@ -0,0 +1,39 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. +# + +include $(RTE_SDK)/mk/rte.vars.mk + +# +# library name +# +LIB = librte_pmd_octeontx2_event.a + +CFLAGS += $(WERROR_FLAGS) +CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/event/octeontx2 +CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2 +CFLAGS += -O3 +CFLAGS += -DALLOW_EXPERIMENTAL_API + +ifneq ($(CONFIG_RTE_ARCH_64),y) +CFLAGS += -Wno-int-to-pointer-cast +CFLAGS += -Wno-pointer-to-int-cast +endif + +EXPORT_MAP := rte_pmd_octeontx2_event_version.map + +LIBABIVER := 1 + +# +# all source are stored in SRCS-y +# + +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c + +LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci +LDLIBS += -lrte_eventdev +LDLIBS += -lrte_common_octeontx2 + +include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build new file mode 100644 index 000000000..c4f442174 --- /dev/null +++ b/drivers/event/octeontx2/meson.build @@ -0,0 +1,21 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. 
+# + +sources = files('otx2_evdev.c') + +allow_experimental_apis = true + +extra_flags = [] +# This integrated controller runs only on a arm64 machine, remove 32bit warnings +if not dpdk_conf.get('RTE_ARCH_64') + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] +endif + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach + +deps += ['bus_pci', 'common_octeontx2'] diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c new file mode 100644 index 000000000..faffd3f0c --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -0,0 +1,70 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include +#include +#include +#include +#include + +#include "otx2_evdev.h" + +static int +otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) +{ + return rte_event_pmd_pci_probe(pci_drv, pci_dev, + sizeof(struct otx2_sso_evdev), + otx2_sso_init); +} + +static int +otx2_sso_remove(struct rte_pci_device *pci_dev) +{ + return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini); +} + +static const struct rte_pci_id pci_sso_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, + PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF) + }, + { + .vendor_id = 0, + }, +}; + +static struct rte_pci_driver pci_sso = { + .id_table = pci_sso_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA, + .probe = otx2_sso_probe, + .remove = otx2_sso_remove, +}; + +int +otx2_sso_init(struct rte_eventdev *event_dev) +{ + RTE_SET_USED(event_dev); + /* For secondary processes, the primary has done all the work */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + return 0; +} + +int +otx2_sso_fini(struct rte_eventdev *event_dev) +{ + RTE_SET_USED(event_dev); + /* For secondary processes, nothing to be done */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + return 0; +} + 
+RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso); +RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); +RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h new file mode 100644 index 000000000..1df233293 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_EVDEV_H__ +#define __OTX2_EVDEV_H__ + +#include + +#include "otx2_common.h" + +#define EVENTDEV_NAME_OCTEONTX2_PMD otx2_eventdev + +#define sso_func_trace otx2_sso_dbg + +#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV +#define OTX2_SSO_MAX_VHWS (UINT8_MAX) + +struct otx2_sso_evdev { +}; + +/* Init and Fini API's */ +int otx2_sso_init(struct rte_eventdev *event_dev); +int otx2_sso_fini(struct rte_eventdev *event_dev); + +#endif /* __OTX2_EVDEV_H__ */ diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map new file mode 100644 index 000000000..41c65c8c9 --- /dev/null +++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map @@ -0,0 +1,4 @@ +DPDK_19.08 { + local: *; +}; + diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 81be289a8..503cecca2 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -109,6 +109,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF)$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOO _LDLIBS-y += -lrte_common_octeontx endif OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL) +OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) ifeq ($(findstring y,$(OCTEONTX2-y)),y) _LDLIBS-y += -lrte_common_octeontx2 endif @@ -292,6 +293,7 @@ endif # CONFIG_RTE_LIBRTE_FSLMC_BUS _LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_MEMPOOL) += -lrte_mempool_octeontx _LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += -lrte_pmd_octeontx +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += -lrte_pmd_octeontx2_event 
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPDL_EVENTDEV) += -lrte_pmd_opdl_event
 endif # CONFIG_RTE_LIBRTE_EVENTDEV

From patchwork Fri Jun 28 18:23:13 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55607
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:13 +0530
Message-ID: <20190628182354.228-3-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 02/42] event/octeontx2: add init and fini for octeontx2 SSO object

From: Pavan Nikhilesh

The SSO object needs to be initialized to communicate with the kernel AF
driver through the mbox using the common APIs. Also initialize the
internal eventdev structure to defaults, and attach an NPA LF to the PF
if needed.
Signed-off-by: Jerin Jacob Signed-off-by: Pavan Nikhilesh Signed-off-by: Nithin Dabilpuram Acked-by: Jerin Jacob --- drivers/event/octeontx2/Makefile | 2 +- drivers/event/octeontx2/meson.build | 2 +- drivers/event/octeontx2/otx2_evdev.c | 84 +++++++++++++++++++++++++++- drivers/event/octeontx2/otx2_evdev.h | 22 +++++++- 4 files changed, 105 insertions(+), 5 deletions(-) diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index dbf6ec22d..36f0b2b12 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -34,6 +34,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci LDLIBS += -lrte_eventdev -LDLIBS += -lrte_common_octeontx2 +LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index c4f442174..3fc96421d 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -18,4 +18,4 @@ foreach flag: extra_flags endif endforeach -deps += ['bus_pci', 'common_octeontx2'] +deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2'] diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index faffd3f0c..08ae820b9 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -46,22 +46,102 @@ static struct rte_pci_driver pci_sso = { int otx2_sso_init(struct rte_eventdev *event_dev) { - RTE_SET_USED(event_dev); + struct free_rsrcs_rsp *rsrc_cnt; + struct rte_pci_device *pci_dev; + struct otx2_sso_evdev *dev; + int rc; + /* For secondary processes, the primary has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + dev = sso_pmd_priv(event_dev); + + pci_dev = container_of(event_dev->dev, struct rte_pci_device, device); + + /* Initialize the base otx2_dev object */ + rc = otx2_dev_init(pci_dev, dev); + if (rc < 0) 
{ + otx2_err("Failed to initialize otx2_dev rc=%d", rc); + goto error; + } + + /* Get SSO and SSOW MSIX rsrc cnt */ + otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox); + rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt); + if (rc < 0) { + otx2_err("Unable to get free rsrc count"); + goto otx2_dev_uninit; + } + otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso, + rsrc_cnt->ssow, rsrc_cnt->npa); + + dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS); + dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP); + /* Grab the NPA LF if required */ + rc = otx2_npa_lf_init(pci_dev, dev); + if (rc < 0) { + otx2_err("Unable to init NPA lf. It might not be provisioned"); + goto otx2_dev_uninit; + } + + dev->drv_inited = true; + dev->is_timeout_deq = 0; + dev->min_dequeue_timeout_ns = USEC2NSEC(1); + dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF); + dev->max_num_events = -1; + dev->nb_event_queues = 0; + dev->nb_event_ports = 0; + + if (!dev->max_event_ports || !dev->max_event_queues) { + otx2_err("Not enough eventdev resource queues=%d ports=%d", + dev->max_event_queues, dev->max_event_ports); + rc = -ENODEV; + goto otx2_npa_lf_uninit; + } + + otx2_sso_pf_func_set(dev->pf_func); + otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d", + event_dev->data->name, dev->max_event_queues, + dev->max_event_ports); + + return 0; + +otx2_npa_lf_uninit: + otx2_npa_lf_fini(); +otx2_dev_uninit: + otx2_dev_fini(pci_dev, dev); +error: + return rc; } int otx2_sso_fini(struct rte_eventdev *event_dev) { - RTE_SET_USED(event_dev); + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct rte_pci_device *pci_dev; + /* For secondary processes, nothing to be done */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + pci_dev = container_of(event_dev->dev, struct rte_pci_device, device); + + if (!dev->drv_inited) + goto dev_fini; + + dev->drv_inited = false; + otx2_npa_lf_fini(); + +dev_fini: + if (otx2_npa_lf_active(dev)) { + 
otx2_info("Common resource in use by other devices");
+		return -EAGAIN;
+	}
+
+	otx2_dev_fini(pci_dev, dev);
+
 	return 0;
 }

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 1df233293..4427efcad 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -8,6 +8,8 @@
 #include <rte_eventdev.h>

 #include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_mempool.h"

 #define EVENTDEV_NAME_OCTEONTX2_PMD otx2_eventdev
@@ -16,8 +18,26 @@
 #define OTX2_SSO_MAX_VHGRP	RTE_EVENT_MAX_QUEUES_PER_DEV
 #define OTX2_SSO_MAX_VHWS	(UINT8_MAX)

+#define USEC2NSEC(__us)		((__us) * 1E3)
+
 struct otx2_sso_evdev {
-};
+	OTX2_DEV; /* Base class */
+	uint8_t max_event_queues;
+	uint8_t max_event_ports;
+	uint8_t is_timeout_deq;
+	uint8_t nb_event_queues;
+	uint8_t nb_event_ports;
+	uint32_t deq_tmo_ns;
+	uint32_t min_dequeue_timeout_ns;
+	uint32_t max_dequeue_timeout_ns;
+	int32_t max_num_events;
+} __rte_cache_aligned;
+
+static inline struct otx2_sso_evdev *
+sso_pmd_priv(const struct rte_eventdev *event_dev)
+{
+	return event_dev->data->dev_private;
+}

 /* Init and Fini API's */
 int otx2_sso_init(struct rte_eventdev *event_dev);

From patchwork Fri Jun 28 18:23:14 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55608
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:14 +0530
Message-ID: <20190628182354.228-4-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 03/42] event/octeontx2: add device capabilities function

From: Pavan Nikhilesh

Add the info_get function to return details on the queue, flow, and
prioritization capabilities, etc. that this device has.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx2/otx2_evdev.c | 31 ++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 08ae820b9..839a5ccaa 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -12,6 +12,36 @@

 #include "otx2_evdev.h"

+static void
+otx2_sso_info_get(struct rte_eventdev *event_dev,
+		  struct rte_event_dev_info *dev_info)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+
+	dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
+	dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
+	dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
+	dev_info->max_event_queues = dev->max_event_queues;
+	dev_info->max_event_queue_flows = (1ULL << 20);
+	dev_info->max_event_queue_priority_levels = 8;
+	dev_info->max_event_priority_levels = 1;
+	dev_info->max_event_ports = dev->max_event_ports;
+	dev_info->max_event_port_dequeue_depth = 1;
+	dev_info->max_event_port_enqueue_depth = 1;
+	dev_info->max_num_events = dev->max_num_events;
+	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+					RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+					RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
+					RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
+					RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
+					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
+}
+
+/* Initialize and register event driver with DPDK Application */
+static struct rte_eventdev_ops otx2_sso_ops = {
+	.dev_infos_get = otx2_sso_info_get,
+};
+
 static int
 otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 {
@@ -51,6 +81,7 @@ otx2_sso_init(struct rte_eventdev *event_dev)
 	struct otx2_sso_evdev *dev;
 	int rc;

+	event_dev->dev_ops = &otx2_sso_ops;
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;

From patchwork Fri Jun 28 18:23:15 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55609
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:15 +0530
Message-ID: <20190628182354.228-5-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 04/42] event/octeontx2: add device configure function

From: Pavan Nikhilesh

Add the device configure function that attaches the requested number of
SSO GWS (event ports) and GGRP (event queues) LFs to the PF.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.c | 258 +++++++++++++++++++++++++++ drivers/event/octeontx2/otx2_evdev.h | 10 ++ 2 files changed, 268 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 839a5ccaa..00996578a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -37,9 +37,267 @@ otx2_sso_info_get(struct rte_eventdev *event_dev, RTE_EVENT_DEV_CAP_NONSEQ_MODE; } +static int +sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type, + uint16_t nb_lf, uint8_t attach) +{ + if (attach) { + struct rsrc_attach_req *req; + + req = otx2_mbox_alloc_msg_attach_resources(mbox); + switch (type) { + case SSO_LF_GGRP: + req->sso = nb_lf; + break; + case SSO_LF_GWS: + req->ssow = nb_lf; + break; + default: + return -EINVAL; + } + req->modify = true; + if (otx2_mbox_process(mbox) < 0) + return -EIO; + } else { + struct rsrc_detach_req *req; + + req = otx2_mbox_alloc_msg_detach_resources(mbox); + switch (type) { + case SSO_LF_GGRP: + req->sso = true; + break; + case SSO_LF_GWS: + req->ssow = true; + break; + default: + return -EINVAL; + } + req->partial = true; + if (otx2_mbox_process(mbox) < 0) + return -EIO; + } + + return 0; +} + +static int +sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox, + enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc) +{ + void *rsp; + int rc; + + if (alloc) { + switch (type) { + case SSO_LF_GGRP: + { + struct sso_lf_alloc_req *req_ggrp; + req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox); + req_ggrp->hwgrps = nb_lf; + } + break; + case SSO_LF_GWS: + { + struct ssow_lf_alloc_req *req_hws; + req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox); + req_hws->hws = nb_lf; + } + break; + default: + return -EINVAL; + } + } else { + switch (type) { + case SSO_LF_GGRP: + { + struct sso_lf_free_req *req_ggrp; + req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox); + req_ggrp->hwgrps = nb_lf; + } + break; + case 
SSO_LF_GWS: + { + struct ssow_lf_free_req *req_hws; + req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox); + req_hws->hws = nb_lf; + } + break; + default: + return -EINVAL; + } + } + + rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0); + if (rc < 0) + return rc; + + if (alloc && type == SSO_LF_GGRP) { + struct sso_lf_alloc_rsp *rsp_ggrp = rsp; + + dev->xaq_buf_size = rsp_ggrp->xaq_buf_size; + dev->xae_waes = rsp_ggrp->xaq_wq_entries; + dev->iue = rsp_ggrp->in_unit_entries; + } + + return 0; +} + +static int +sso_configure_ports(const struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct otx2_mbox *mbox = dev->mbox; + uint8_t nb_lf; + int rc; + + otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports); + + nb_lf = dev->nb_event_ports; + /* Ask AF to attach required LFs. */ + rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true); + if (rc < 0) { + otx2_err("Failed to attach SSO GWS LF"); + return -ENODEV; + } + + if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) { + sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false); + otx2_err("Failed to init SSO GWS LF"); + return -ENODEV; + } + + return rc; +} + +static int +sso_configure_queues(const struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct otx2_mbox *mbox = dev->mbox; + uint8_t nb_lf; + int rc; + + otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues); + + nb_lf = dev->nb_event_queues; + /* Ask AF to attach required LFs. 
*/ + rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true); + if (rc < 0) { + otx2_err("Failed to attach SSO GGRP LF"); + return -ENODEV; + } + + if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) { + sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false); + otx2_err("Failed to init SSO GGRP LF"); + return -ENODEV; + } + + return rc; +} + +static void +sso_lf_teardown(struct otx2_sso_evdev *dev, + enum otx2_sso_lf_type lf_type) +{ + uint8_t nb_lf; + + switch (lf_type) { + case SSO_LF_GGRP: + nb_lf = dev->nb_event_queues; + break; + case SSO_LF_GWS: + nb_lf = dev->nb_event_ports; + break; + default: + return; + } + + sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false); + sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false); +} + +static int +otx2_sso_configure(const struct rte_eventdev *event_dev) +{ + struct rte_event_dev_config *conf = &event_dev->data->dev_conf; + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + uint32_t deq_tmo_ns; + int rc; + + sso_func_trace(); + deq_tmo_ns = conf->dequeue_timeout_ns; + + if (deq_tmo_ns == 0) + deq_tmo_ns = dev->min_dequeue_timeout_ns; + + if (deq_tmo_ns < dev->min_dequeue_timeout_ns || + deq_tmo_ns > dev->max_dequeue_timeout_ns) { + otx2_err("Unsupported dequeue timeout requested"); + return -EINVAL; + } + + if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) + dev->is_timeout_deq = 1; + + dev->deq_tmo_ns = deq_tmo_ns; + + if (conf->nb_event_ports > dev->max_event_ports || + conf->nb_event_queues > dev->max_event_queues) { + otx2_err("Unsupported event queues/ports requested"); + return -EINVAL; + } + + if (conf->nb_event_port_dequeue_depth > 1) { + otx2_err("Unsupported event port deq depth requested"); + return -EINVAL; + } + + if (conf->nb_event_port_enqueue_depth > 1) { + otx2_err("Unsupported event port enq depth requested"); + return -EINVAL; + } + + if (dev->nb_event_queues) { + /* Finit any previous queues. */ + sso_lf_teardown(dev, SSO_LF_GGRP); + } + if (dev->nb_event_ports) { + /* Finit any previous ports. 
*/ + sso_lf_teardown(dev, SSO_LF_GWS); + } + + dev->nb_event_queues = conf->nb_event_queues; + dev->nb_event_ports = conf->nb_event_ports; + + if (sso_configure_ports(event_dev)) { + otx2_err("Failed to configure event ports"); + return -ENODEV; + } + + if (sso_configure_queues(event_dev) < 0) { + otx2_err("Failed to configure event queues"); + rc = -ENODEV; + goto teardown_hws; + } + + dev->configured = 1; + rte_mb(); + + return 0; + +teardown_hws: + sso_lf_teardown(dev, SSO_LF_GWS); + dev->nb_event_queues = 0; + dev->nb_event_ports = 0; + dev->configured = 0; + return rc; +} + /* Initialize and register event driver with DPDK Application */ static struct rte_eventdev_ops otx2_sso_ops = { .dev_infos_get = otx2_sso_info_get, + .dev_configure = otx2_sso_configure, }; static int diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 4427efcad..feb4ed6f4 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -20,6 +20,11 @@ #define USEC2NSEC(__us) ((__us) * 1E3) +enum otx2_sso_lf_type { + SSO_LF_GGRP, + SSO_LF_GWS +}; + struct otx2_sso_evdev { OTX2_DEV; /* Base class */ uint8_t max_event_queues; @@ -27,10 +32,15 @@ struct otx2_sso_evdev { uint8_t is_timeout_deq; uint8_t nb_event_queues; uint8_t nb_event_ports; + uint8_t configured; uint32_t deq_tmo_ns; uint32_t min_dequeue_timeout_ns; uint32_t max_dequeue_timeout_ns; int32_t max_num_events; + /* HW const */ + uint32_t xae_waes; + uint32_t xaq_buf_size; + uint32_t iue; } __rte_cache_aligned; static inline struct otx2_sso_evdev * From patchwork Fri Jun 28 18:23:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55610 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 
E87391B965; Fri, 28 Jun 2019 20:24:15 +0200 (CEST) Date: Fri, 28 Jun 2019 23:53:16 +0530 Message-ID: <20190628182354.228-6-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References:
<20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 05/42] event/octeontx2: add event queue config functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add default config, setup and release functions for event queues i.e. SSO GGRPS. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.c | 50 ++++++++++++++++++++++++++++ drivers/event/octeontx2/otx2_evdev.h | 17 ++++++++++ 2 files changed, 67 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 00996578a..2290598d0 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -142,6 +142,13 @@ sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox, return 0; } +static void +otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id) +{ + RTE_SET_USED(event_dev); + RTE_SET_USED(queue_id); +} + static int sso_configure_ports(const struct rte_eventdev *event_dev) { @@ -294,10 +301,53 @@ otx2_sso_configure(const struct rte_eventdev *event_dev) return rc; } +static void +otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id, + struct rte_event_queue_conf *queue_conf) +{ + RTE_SET_USED(event_dev); + RTE_SET_USED(queue_id); + + queue_conf->nb_atomic_flows = (1ULL << 20); + queue_conf->nb_atomic_order_sequences = (1ULL << 20); + queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES; + queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL; +} + +static int +otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id, + const struct rte_event_queue_conf *queue_conf) +{ + struct otx2_sso_evdev *dev = 
sso_pmd_priv(event_dev); + struct otx2_mbox *mbox = dev->mbox; + struct sso_grp_priority *req; + int rc; + + sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority); + + req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox); + req->grp = queue_id; + req->weight = 0xFF; + req->affinity = 0xFF; + /* Normalize <0-255> to <0-7> */ + req->priority = queue_conf->priority / 32; + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to set priority queue=%d", queue_id); + return rc; + } + + return 0; +} + /* Initialize and register event driver with DPDK Application */ static struct rte_eventdev_ops otx2_sso_ops = { .dev_infos_get = otx2_sso_info_get, .dev_configure = otx2_sso_configure, + .queue_def_conf = otx2_sso_queue_def_conf, + .queue_setup = otx2_sso_queue_setup, + .queue_release = otx2_sso_queue_release, }; static int diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index feb4ed6f4..b46402771 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -18,6 +18,23 @@ #define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV #define OTX2_SSO_MAX_VHWS (UINT8_MAX) +/* SSO LF register offsets (BAR2) */ +#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) +#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull) + +#define SSO_LF_GGRP_QCTL (0x20ull) +#define SSO_LF_GGRP_EXE_DIS (0x80ull) +#define SSO_LF_GGRP_INT (0x100ull) +#define SSO_LF_GGRP_INT_W1S (0x108ull) +#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull) +#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull) +#define SSO_LF_GGRP_INT_THR (0x140ull) +#define SSO_LF_GGRP_INT_CNT (0x180ull) +#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull) +#define SSO_LF_GGRP_AQ_CNT (0x1c0ull) +#define SSO_LF_GGRP_AQ_THR (0x1e0ull) +#define SSO_LF_GGRP_MISC_CNT (0x200ull) + #define USEC2NSEC(__us) ((__us) * 1E3) enum otx2_sso_lf_type { From patchwork Fri Jun 28 18:23:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55611 X-Patchwork-Delegate: jerinj@marvell.com Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id
E169D3F7040; Fri, 28 Jun 2019 11:24:09 -0700 (PDT) From: To: , Pavan Nikhilesh CC: Date: Fri, 28 Jun 2019 23:53:17 +0530 Message-ID: <20190628182354.228-7-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 06/42] event/octeontx2: allocate event inflight buffers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Allocate buffers in DRAM that hold inflight events. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/Makefile | 2 +- drivers/event/octeontx2/otx2_evdev.c | 116 ++++++++++++++++++++++++++- drivers/event/octeontx2/otx2_evdev.h | 8 ++ 3 files changed, 124 insertions(+), 2 deletions(-) diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index 36f0b2b12..b3c3beccb 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -33,7 +33,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -LDLIBS += -lrte_eventdev +LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 2290598d0..fc4dbda0a 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include "otx2_evdev.h" @@ -203,6 +204,107 @@ sso_configure_queues(const struct rte_eventdev *event_dev) return rc; } +static int 
+sso_xaq_allocate(struct otx2_sso_evdev *dev) +{ + const struct rte_memzone *mz; + struct npa_aura_s *aura; + static int reconfig_cnt; + char pool_name[RTE_MEMZONE_NAMESIZE]; + uint32_t xaq_cnt; + int rc; + + if (dev->xaq_pool) + rte_mempool_free(dev->xaq_pool); + + /* + * Allocate memory for Add work backpressure. + */ + mz = rte_memzone_lookup(OTX2_SSO_FC_NAME); + if (mz == NULL) + mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME, + OTX2_ALIGN + + sizeof(struct npa_aura_s), + rte_socket_id(), + RTE_MEMZONE_IOVA_CONTIG, + OTX2_ALIGN); + if (mz == NULL) { + otx2_err("Failed to allocate mem for fcmem"); + return -ENOMEM; + } + + dev->fc_iova = mz->iova; + dev->fc_mem = mz->addr; + + aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN); + memset(aura, 0, sizeof(struct npa_aura_s)); + + aura->fc_ena = 1; + aura->fc_addr = dev->fc_iova; + aura->fc_hyst_bits = 0; /* Store count on all updates */ + + /* Taken from HRM 14.3.3(4) */ + xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT; + xaq_cnt += (dev->iue / dev->xae_waes) + + (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues); + + otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt); + /* Setup XAQ based on number of nb queues. */ + snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt); + dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name, + xaq_cnt, dev->xaq_buf_size, 0, 0, + rte_socket_id(), 0); + + if (dev->xaq_pool == NULL) { + otx2_err("Unable to create empty mempool."); + rte_memzone_free(mz); + return -ENOMEM; + } + + rc = rte_mempool_set_ops_byname(dev->xaq_pool, + rte_mbuf_platform_mempool_ops(), aura); + if (rc != 0) { + otx2_err("Unable to set xaqpool ops."); + goto alloc_fail; + } + + rc = rte_mempool_populate_default(dev->xaq_pool); + if (rc < 0) { + otx2_err("Unable to set populate xaqpool."); + goto alloc_fail; + } + reconfig_cnt++; + /* When SW does addwork (enqueue) check if there is space in XAQ by + * comparing fc_addr above against the xaq_lmt calculated below. 
+ * There should be a minimum headroom (OTX2_SSO_XAQ_SLACK / 2) for SSO + * to request XAQ to cache them even before enqueue is called. + */ + dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 * + dev->nb_event_queues); + dev->nb_xaq_cfg = xaq_cnt; + + return 0; +alloc_fail: + rte_mempool_free(dev->xaq_pool); + rte_memzone_free(mz); + return rc; +} + +static int +sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct sso_hw_setconfig *req; + + otx2_sso_dbg("Configuring XAQ for GGRPs"); + req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox); + req->npa_pf_func = otx2_npa_pf_func_get(); + req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id); + req->hwgrps = dev->nb_event_queues; + + return otx2_mbox_process(mbox); +} + static void sso_lf_teardown(struct otx2_sso_evdev *dev, enum otx2_sso_lf_type lf_type) @@ -288,11 +390,23 @@ otx2_sso_configure(const struct rte_eventdev *event_dev) goto teardown_hws; } + if (sso_xaq_allocate(dev) < 0) { + rc = -ENOMEM; + goto teardown_hwggrp; + } + + rc = sso_ggrp_alloc_xaq(dev); + if (rc < 0) { + otx2_err("Failed to alloc xaq to ggrp %d", rc); + goto teardown_hwggrp; + } + dev->configured = 1; rte_mb(); return 0; - +teardown_hwggrp: + sso_lf_teardown(dev, SSO_LF_GGRP); teardown_hws: sso_lf_teardown(dev, SSO_LF_GWS); dev->nb_event_queues = 0; diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index b46402771..375640bca 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -17,6 +17,9 @@ #define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV #define OTX2_SSO_MAX_VHWS (UINT8_MAX) +#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc" +#define OTX2_SSO_XAQ_SLACK (8) +#define OTX2_SSO_XAQ_CACHE_CNT (0x7) /* SSO LF register offsets (BAR2) */ #define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull) @@ -54,6 +57,11 @@ struct otx2_sso_evdev { uint32_t min_dequeue_timeout_ns; uint32_t max_dequeue_timeout_ns; int32_t max_num_events; + 
uint64_t *fc_mem; + uint64_t xaq_lmt; + uint64_t nb_xaq_cfg; + rte_iova_t fc_iova; + struct rte_mempool *xaq_pool; /* HW const */ uint32_t xae_waes; uint32_t xaq_buf_size; From patchwork Fri Jun 28 18:23:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55612 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:18 +0530 Message-ID: <20190628182354.228-8-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 07/42] event/octeontx2: add devargs for inflight buffer count From: Pavan Nikhilesh The number of events for an *open system* event device is specified as -1, as per the eventdev specification. Since OCTEON TX2 SSO in-flight events are limited only by DRAM size, the xae_cnt devargs parameter is introduced to provide an upper limit for in-flight events.
Example: --dev "0002:0e:00.0,xae_cnt=8192" Signed-off-by: Pavan Nikhilesh Acked-by: Jerin Jacob --- doc/guides/eventdevs/octeontx2.rst | 12 ++++++++++++ drivers/event/octeontx2/Makefile | 2 +- drivers/event/octeontx2/otx2_evdev.c | 28 +++++++++++++++++++++++++++- drivers/event/octeontx2/otx2_evdev.h | 11 +++++++++++ 4 files changed, 51 insertions(+), 2 deletions(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index 341c5b21d..f83cf1e9d 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -46,6 +46,18 @@ The following option can be modified in the ``config`` file. Toggle compilation of the ``librte_pmd_octeontx2_event`` driver. +Runtime Config Options +~~~~~~~~~~~~~~~~~~~~~~ + +- ``Maximum number of in-flight events`` (default ``8192``) + + In **Marvell OCTEON TX2** the maximum number of in-flight events is limited + only by DRAM size; the ``xae_cnt`` devargs parameter is introduced to provide + an upper limit for in-flight events.
+ For example:: + + --dev "0002:0e:00.0,xae_cnt=16384" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index b3c3beccb..58853e1b9 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -32,7 +32,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c -LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci +LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2 diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index fc4dbda0a..94c97fc9e 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include @@ -245,7 +246,10 @@ sso_xaq_allocate(struct otx2_sso_evdev *dev) /* Taken from HRM 14.3.3(4) */ xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT; - xaq_cnt += (dev->iue / dev->xae_waes) + + if (dev->xae_cnt) + xaq_cnt += dev->xae_cnt / dev->xae_waes; + else + xaq_cnt += (dev->iue / dev->xae_waes) + (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues); otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt); @@ -464,6 +468,25 @@ static struct rte_eventdev_ops otx2_sso_ops = { .queue_release = otx2_sso_queue_release, }; +#define OTX2_SSO_XAE_CNT "xae_cnt" + +static void +sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) +{ + struct rte_kvargs *kvlist; + + if (devargs == NULL) + return; + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) + return; + + rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value, + &dev->xae_cnt); + + rte_kvargs_free(kvlist); +} + static int otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) { @@ -553,6 +576,8 @@ otx2_sso_init(struct rte_eventdev *event_dev) goto otx2_npa_lf_uninit; } + 
sso_parse_devargs(dev, pci_dev->device.devargs); + otx2_sso_pf_func_set(dev->pf_func); otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d", event_dev->data->name, dev->max_event_queues, @@ -601,3 +626,4 @@ otx2_sso_fini(struct rte_eventdev *event_dev) RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso); RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); +RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "="); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 375640bca..acc8b6b3e 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -62,6 +62,8 @@ struct otx2_sso_evdev { uint64_t nb_xaq_cfg; rte_iova_t fc_iova; struct rte_mempool *xaq_pool; + /* Dev args */ + uint32_t xae_cnt; /* HW const */ uint32_t xae_waes; uint32_t xaq_buf_size; @@ -74,6 +76,15 @@ sso_pmd_priv(const struct rte_eventdev *event_dev) return event_dev->data->dev_private; } +static inline int +parse_kvargs_value(const char *key, const char *value, void *opaque) +{ + RTE_SET_USED(key); + + *(uint32_t *)opaque = (uint32_t)atoi(value); + return 0; +} + /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); From patchwork Fri Jun 28 18:23:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55613 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DEE801B9A1; Fri, 28 Jun 2019 20:24:25 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id DFC581B970 for ; Fri, 28 Jun 2019 20:24:18 +0200 (CEST) Received: from pps.filterd 
(m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com Date: Fri, 28 Jun 2019 23:53:19 +0530 Message-ID: <20190628182354.228-9-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 08/42] event/octeontx2: add event port config functions
From: Pavan Nikhilesh

Add default config, setup and release functions for event ports,
i.e. SSO GWS.

Signed-off-by: Pavan Nikhilesh

---
 drivers/event/octeontx2/otx2_evdev.c | 110 ++++++++++++++++++++++++++-
 drivers/event/octeontx2/otx2_evdev.h |  59 ++++++++++++++
 2 files changed, 168 insertions(+), 1 deletion(-)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 94c97fc9e..a6bf861fb 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -144,6 +144,12 @@ sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
 	return 0;
 }
 
+static void
+otx2_sso_port_release(void *port)
+{
+	rte_free(port);
+}
+
 static void
 otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 {
@@ -151,13 +157,24 @@ otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+static void
+sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
+{
+	ws->tag_op = base + SSOW_LF_GWS_TAG;
+	ws->wqp_op = base + SSOW_LF_GWS_WQP;
+	ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
+	ws->swtp_op = base + SSOW_LF_GWS_SWTP;
+	ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
+	ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
+}
+
 static int
 sso_configure_ports(const struct rte_eventdev *event_dev)
 {
 	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
 	struct otx2_mbox *mbox = dev->mbox;
 	uint8_t nb_lf;
-	int rc;
+	int i, rc;
 
 	otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
@@ -175,6 +192,40 @@ sso_configure_ports(const struct rte_eventdev *event_dev)
 		return -ENODEV;
 	}
 
+	for (i = 0; i < nb_lf; i++) {
+		struct otx2_ssogws *ws;
+		uintptr_t base;
+
+		/* Free memory prior to re-allocation if needed */
+		if (event_dev->data->ports[i] != NULL) {
+			ws = event_dev->data->ports[i];
+			rte_free(ws);
+			ws = NULL;
+		}
+
+		/* Allocate event port memory */
+		ws = rte_zmalloc_socket("otx2_sso_ws",
+					sizeof(struct otx2_ssogws),
+					RTE_CACHE_LINE_SIZE,
+					event_dev->data->socket_id);
+		if (ws == NULL) {
+			otx2_err("Failed to alloc memory for port=%d", i);
+			rc = -ENOMEM;
+			break;
+		}
+
+		ws->port = i;
+		base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
+		sso_set_port_ops(ws, base);
+
+		event_dev->data->ports[i] = ws;
+	}
+
+	if (rc < 0) {
+		sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
+		sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
+	}
+
 	return rc;
 }
 
@@ -459,6 +510,60 @@ otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
 	return 0;
 }
 
+static void
+otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
+		       struct rte_event_port_conf *port_conf)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+
+	RTE_SET_USED(port_id);
+	port_conf->new_event_threshold = dev->max_num_events;
+	port_conf->dequeue_depth = 1;
+	port_conf->enqueue_depth = 1;
+}
+
+static int
+otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
+		    const struct rte_event_port_conf *port_conf)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
+	uint64_t val;
+	uint16_t q;
+
+	sso_func_trace("Port=%d", port_id);
+	RTE_SET_USED(port_conf);
+
+	if (event_dev->data->ports[port_id] == NULL) {
+		otx2_err("Invalid port Id %d", port_id);
+		return -EINVAL;
+	}
+
+	for (q = 0; q < dev->nb_event_queues; q++) {
+		grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
+		if (grps_base[q] == 0) {
+			otx2_err("Failed to get grp[%d] base addr", q);
+			return -EINVAL;
+		}
+	}
+
+	/* Set get_work timeout for HWS */
+	val = NSEC2USEC(dev->deq_tmo_ns) - 1;
+
+	struct otx2_ssogws *ws = event_dev->data->ports[port_id];
+	uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+
+	rte_memcpy(ws->grps_base, grps_base,
+		   sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
+	ws->fc_mem = dev->fc_mem;
+	ws->xaq_lmt = dev->xaq_lmt;
+	otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
+
+	otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
+
+	return 0;
+}
+
 /* Initialize and register event driver with DPDK Application */
 static struct rte_eventdev_ops otx2_sso_ops = {
 	.dev_infos_get = otx2_sso_info_get,
@@ -466,6 +571,9 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.queue_def_conf = otx2_sso_queue_def_conf,
 	.queue_setup = otx2_sso_queue_setup,
 	.queue_release = otx2_sso_queue_release,
+	.port_def_conf = otx2_sso_port_def_conf,
+	.port_setup = otx2_sso_port_setup,
+	.port_release = otx2_sso_port_release,
 };
 
 #define OTX2_SSO_XAE_CNT "xae_cnt"

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index acc8b6b3e..3f4931ff1 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -38,6 +38,42 @@
 #define SSO_LF_GGRP_AQ_THR (0x1e0ull)
 #define SSO_LF_GGRP_MISC_CNT (0x200ull)
 
+/* SSOW LF register offsets (BAR2) */
+#define SSOW_LF_GWS_LINKS (0x10ull)
+#define SSOW_LF_GWS_PENDWQP (0x40ull)
+#define SSOW_LF_GWS_PENDSTATE (0x50ull)
+#define SSOW_LF_GWS_NW_TIM (0x70ull)
+#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
+#define SSOW_LF_GWS_INT (0x100ull)
+#define SSOW_LF_GWS_INT_W1S (0x108ull)
+#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
+#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
+#define SSOW_LF_GWS_TAG (0x200ull)
+#define SSOW_LF_GWS_WQP (0x210ull)
+#define SSOW_LF_GWS_SWTP (0x220ull)
+#define SSOW_LF_GWS_PENDTAG (0x230ull)
+#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
+#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
+#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
+#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
+#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
+#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
+#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
+#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
+#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
+#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
+#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
+#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
+#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
+#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
+#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
+#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
+
+#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
+
+#define NSEC2USEC(__ns) ((__ns) / 1E3)
+#define USEC2NSEC(__us) ((__us) * 1E3)
 
 enum otx2_sso_lf_type {
@@ -70,6 +106,29 @@
 	uint32_t iue;
 } __rte_cache_aligned;
 
+#define OTX2_SSOGWS_OPS \
+	/* WS ops */ \
+	uintptr_t getwrk_op; \
+	uintptr_t tag_op; \
+	uintptr_t wqp_op; \
+	uintptr_t swtp_op; \
+	uintptr_t swtag_norm_op; \
+	uintptr_t swtag_desched_op; \
+	uint8_t cur_tt; \
+	uint8_t cur_grp
+
+/* Event port aka GWS */
+struct otx2_ssogws {
+	/* Get Work Fastpath data */
+	OTX2_SSOGWS_OPS;
+	uint8_t swtag_req;
+	uint8_t port;
+	/* Add Work Fastpath data */
+	uint64_t xaq_lmt __rte_cache_aligned;
+	uint64_t *fc_mem;
+	uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
+} __rte_cache_aligned;
+
 static inline struct otx2_sso_evdev *
 sso_pmd_priv(const struct rte_eventdev *event_dev)
 {

From patchwork Fri Jun 28 18:23:20 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55614
Date: Fri, 28 Jun 2019 23:53:20 +0530
Message-ID: <20190628182354.228-10-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 09/42] event/octeontx2:
support linking queues to ports
From: Pavan Nikhilesh

Links between queues and ports are controlled by setting/clearing GGRP
membership in SSOW_LF_GWS_GRPMSK_CHG.

Signed-off-by: Pavan Nikhilesh

---
 drivers/event/octeontx2/otx2_evdev.c | 73 ++++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index a6bf861fb..53e68902a 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -39,6 +39,60 @@ otx2_sso_info_get(struct rte_eventdev *event_dev,
 					RTE_EVENT_DEV_CAP_NONSEQ_MODE;
 }
 
+static void
+sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
+{
+	uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+	uint64_t val;
+
+	val = queue;
+	val |= 0ULL << 12; /* SET 0 */
+	val |= 0x8000800080000000ULL; /* Don't modify rest of the masks */
+	val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
+
+	otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
+}
+
+static int
+otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
+		   const uint8_t queues[], const uint8_t priorities[],
+		   uint16_t nb_links)
+{
+	uint8_t port_id = 0;
+	uint16_t link;
+
+	RTE_SET_USED(event_dev);
+	RTE_SET_USED(priorities);
+	for (link = 0; link < nb_links; link++) {
+		struct otx2_ssogws *ws = port;
+
+		port_id = ws->port;
+		sso_port_link_modify(ws, queues[link], true);
+	}
+	sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
+
+	return (int)nb_links;
+}
+
+static int
+otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
+		     uint8_t queues[], uint16_t nb_unlinks)
+{
+	uint8_t port_id = 0;
+	uint16_t unlink;
+
+	RTE_SET_USED(event_dev);
+	for (unlink = 0; unlink < nb_unlinks; unlink++) {
+		struct otx2_ssogws *ws = port;
+
+		port_id = ws->port;
+		sso_port_link_modify(ws, queues[unlink], false);
+	}
+	sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
+
+	return (int)nb_unlinks;
+}
+
 static int
 sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
 	      uint16_t nb_lf, uint8_t attach)
@@ -157,6 +211,21 @@ otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
 	RTE_SET_USED(queue_id);
 }
 
+static void
+sso_clr_links(const struct rte_eventdev *event_dev)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	int i, j;
+
+	for (i = 0; i < dev->nb_event_ports; i++) {
+		struct otx2_ssogws *ws;
+
+		ws = event_dev->data->ports[i];
+		for (j = 0; j < dev->nb_event_queues; j++)
+			sso_port_link_modify(ws, j, false);
+	}
+}
+
 static void
 sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
 {
@@ -450,6 +519,8 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		goto teardown_hwggrp;
 	}
 
+	/* Clear any prior port-queue mapping. */
+	sso_clr_links(event_dev);
 	rc = sso_ggrp_alloc_xaq(dev);
 	if (rc < 0) {
 		otx2_err("Failed to alloc xaq to ggrp %d", rc);
@@ -574,6 +645,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_def_conf = otx2_sso_port_def_conf,
 	.port_setup = otx2_sso_port_setup,
 	.port_release = otx2_sso_port_release,
+	.port_link = otx2_sso_port_link,
+	.port_unlink = otx2_sso_port_unlink,
 };
 
 #define OTX2_SSO_XAE_CNT "xae_cnt"

From patchwork Fri Jun 28 18:23:21 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55615
Date: Fri, 28 Jun 2019 23:53:21 +0530
Message-ID: <20190628182354.228-11-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 10/42] event/octeontx2: support dequeue timeout tick conversion
From: Pavan Nikhilesh

Add function to convert dequeue timeout from ns to ticks.
Signed-off-by: Pavan Nikhilesh

---
 drivers/event/octeontx2/otx2_evdev.c | 11 +++++++++++
 drivers/event/octeontx2/otx2_evdev.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 53e68902a..ef6693bc5 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -635,6 +635,16 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
 	return 0;
 }
 
+static int
+otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
+		       uint64_t *tmo_ticks)
+{
+	RTE_SET_USED(event_dev);
+	*tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
+
+	return 0;
+}
+
 /* Initialize and register event driver with DPDK Application */
 static struct rte_eventdev_ops otx2_sso_ops = {
 	.dev_infos_get = otx2_sso_info_get,
@@ -647,6 +657,7 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_release = otx2_sso_port_release,
 	.port_link = otx2_sso_port_link,
 	.port_unlink = otx2_sso_port_unlink,
+	.timeout_ticks = otx2_sso_timeout_ticks,
 };
 
 #define OTX2_SSO_XAE_CNT "xae_cnt"

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 3f4931ff1..1a9de1b86 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -75,6 +75,7 @@
 #define NSEC2USEC(__ns) ((__ns) / 1E3)
 #define USEC2NSEC(__us) ((__us) * 1E3)
+#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
 
 enum otx2_sso_lf_type {
 	SSO_LF_GGRP,

From patchwork Fri Jun 28 18:23:22 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55616
Date: Fri, 28 Jun 2019 23:53:22 +0530
Message-ID: <20190628182354.228-12-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 11/42] event/octeontx2: add SSO GWS and GGRP IRQ handlers
From: Pavan Nikhilesh

Register and implement SSO GWS and GGRP IRQ handlers for error
interrupts.

Signed-off-by: Pavan Nikhilesh
Signed-off-by: Jerin Jacob

---
 drivers/event/octeontx2/Makefile         |   1 +
 drivers/event/octeontx2/meson.build      |   4 +-
 drivers/event/octeontx2/otx2_evdev.c     |  38 +++++
 drivers/event/octeontx2/otx2_evdev.h     |   6 +
 drivers/event/octeontx2/otx2_evdev_irq.c | 175 +++++++++++++++++++++++
 5 files changed, 223 insertions(+), 1 deletion(-)
 create mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c

diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 58853e1b9..4f09c1fc8 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
 #
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c
 
 LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs
 LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf

diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 3fc96421d..5aa8113bd 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -2,7 +2,9 @@
 # Copyright(C) 2019 Marvell International Ltd.
 #
-sources = files('otx2_evdev.c')
+sources = files('otx2_evdev.c',
+		'otx2_evdev_irq.c',
+		)
 
 allow_experimental_apis = true

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index ef6693bc5..b92bf0407 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -13,6 +13,29 @@
 #include
 
 #include "otx2_evdev.h"
+#include "otx2_irq.h"
+
+static inline int
+sso_get_msix_offsets(const struct rte_eventdev *event_dev)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uint8_t nb_ports = dev->nb_event_ports;
+	struct otx2_mbox *mbox = dev->mbox;
+	struct msix_offset_rsp *msix_rsp;
+	int i, rc;
+
+	/* Get SSO and SSOW MSIX vector offsets */
+	otx2_mbox_alloc_msg_msix_offset(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+	for (i = 0; i < nb_ports; i++)
+		dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
+
+	for (i = 0; i < dev->nb_event_queues; i++)
+		dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
+
+	return rc;
+}
 
 static void
 otx2_sso_info_get(struct rte_eventdev *event_dev,
@@ -491,6 +514,9 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		return -EINVAL;
 	}
 
+	if (dev->configured)
+		sso_unregister_irqs(event_dev);
+
 	if (dev->nb_event_queues) {
 		/* Finit any previous queues. */
 		sso_lf_teardown(dev, SSO_LF_GGRP);
@@ -527,6 +553,18 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		goto teardown_hwggrp;
 	}
 
+	rc = sso_get_msix_offsets(event_dev);
+	if (rc < 0) {
+		otx2_err("Failed to get msix offsets %d", rc);
+		goto teardown_hwggrp;
+	}
+
+	rc = sso_register_irqs(event_dev);
+	if (rc < 0) {
+		otx2_err("Failed to register irq %d", rc);
+		goto teardown_hwggrp;
+	}
+
 	dev->configured = 1;
 	rte_mb();

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 1a9de1b86..e1d2dcc69 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -105,6 +105,9 @@ struct otx2_sso_evdev {
 	uint32_t xae_waes;
 	uint32_t xaq_buf_size;
 	uint32_t iue;
+	/* MSIX offsets */
+	uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
+	uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
 } __rte_cache_aligned;
 
 #define OTX2_SSOGWS_OPS \
@@ -148,5 +151,8 @@ parse_kvargs_value(const char *key, const char *value, void *opaque)
 /* Init and Fini API's */
 int otx2_sso_init(struct rte_eventdev *event_dev);
 int otx2_sso_fini(struct rte_eventdev *event_dev);
+/* IRQ handlers */
+int sso_register_irqs(const struct rte_eventdev *event_dev);
+void sso_unregister_irqs(const struct rte_eventdev *event_dev);
 
 #endif /* __OTX2_EVDEV_H__ */

diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
new file mode 100644
index 000000000..7df21cc24
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_evdev_irq.c
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_evdev.h"
+
+static void
+sso_lf_irq(void *param)
+{
+	uintptr_t base = (uintptr_t)param;
+	uint64_t intr;
+	uint8_t ggrp;
+
+	ggrp = (base >> 12) & 0xFF;
+
+	intr = otx2_read64(base + SSO_LF_GGRP_INT);
+	if (intr == 0)
+		return;
+
+	otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
+
+	/* Clear interrupt */
+	otx2_write64(intr, base + SSO_LF_GGRP_INT);
+}
+
+static int
+sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
+		    uintptr_t base)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	int rc, vec;
+
+	vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
+	/* Enable hw interrupt */
+	otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
+
+	return rc;
+}
+
+static void
+ssow_lf_irq(void *param)
+{
+	uintptr_t base = (uintptr_t)param;
+	uint8_t gws = (base >> 12) & 0xFF;
+	uint64_t intr;
+
+	intr = otx2_read64(base + SSOW_LF_GWS_INT);
+	if (intr == 0)
+		return;
+
+	otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
+
+	/* Clear interrupt */
+	otx2_write64(intr, base + SSOW_LF_GWS_INT);
+}
+
+static int
+ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
+		     uintptr_t base)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	int rc, vec;
+
+	vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
+	/* Enable hw interrupt */
+	otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
+
+	return rc;
+}
+
+static void
+sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
+		      uint16_t ggrp_msixoff, uintptr_t base)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	int vec;
+
+	vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
+	otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
+}
+
+static void
+ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
+		       uint16_t gws_msixoff, uintptr_t base)
+{
+	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	int vec;
+
+	vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
+	otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
+}
+
+int
+sso_register_irqs(const struct rte_eventdev *event_dev)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	int i, rc = -EINVAL;
+	uint8_t nb_ports;
+
+	nb_ports = dev->nb_event_ports;
+
+	for (i = 0; i < dev->nb_event_queues; i++) {
+		if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
+			otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
+				 i, dev->sso_msixoff[i]);
+			goto fail;
+		}
+	}
+
+	for (i = 0; i < nb_ports; i++) {
+		if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
+			otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
+				 i, dev->ssow_msixoff[i]);
+			goto fail;
+		}
+	}
+
+	for (i = 0; i < dev->nb_event_queues; i++) {
+		uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
+					      i << 12);
+		rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
+	}
+
+	for (i = 0; i < nb_ports; i++) {
+		uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
+					      i << 12);
+		rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
+					  base);
+	}
+
+fail:
+	return rc;
+}
+
+void
+sso_unregister_irqs(const struct rte_eventdev *event_dev)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uint8_t nb_ports;
+	int i;
+
+	nb_ports = dev->nb_event_ports;
+
+	for (i = 0; i < dev->nb_event_queues; i++) {
+		uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
+					      i << 12);
+		sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
+	}
+
+	for (i = 0; i < nb_ports; i++) {
+		uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
+					      i << 12);
+		ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
+	}
+}

From patchwork Fri Jun 28 18:23:23 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55617
Date: Fri, 28 Jun 2019 23:53:23 +0530
Message-ID: <20190628182354.228-13-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 12/42] event/octeontx2: add register dump functions
From: Pavan Nikhilesh

Add SSO GWS and GGRP register dump functions to aid debugging.
Signed-off-by: Pavan Nikhilesh

---
 drivers/event/octeontx2/otx2_evdev.c | 68 ++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index b92bf0407..6c37c5b5c 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -683,6 +683,72 @@ otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
 	return 0;
 }
 
+static void
+ssogws_dump(struct otx2_ssogws *ws, FILE *f)
+{
+	uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
+
+	fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
+	fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_LINKS));
+	fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_PENDWQP));
+	fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
+	fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_NW_TIM));
+	fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_TAG));
+	fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_WQP));
+	fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_SWTP));
+	fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
+		otx2_read64(base + SSOW_LF_GWS_PENDTAG));
+}
+
+static void
+ssoggrp_dump(uintptr_t base, FILE *f)
+{
+	fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
+	fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_QCTL));
+	fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
+	fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_INT_THR));
+	fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_INT_CNT));
+	fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
+	fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_AQ_THR));
+	fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
+		otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
+}
+
+static void
+otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
+{
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
+	uint8_t queue;
+	uint8_t port;
+
+	/* Dump SSOW registers */
+	for (port = 0; port < dev->nb_event_ports; port++) {
+		fprintf(f, "[%s]SSO single workslot[%d] dump\n",
+			__func__, port);
+		ssogws_dump(event_dev->data->ports[port], f);
+	}
+
+	/* Dump SSO registers */
+	for (queue = 0; queue < dev->nb_event_queues; queue++) {
+		fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
+		struct otx2_ssogws *ws = event_dev->data->ports[0];
+		ssoggrp_dump(ws->grps_base[queue], f);
+	}
+}
+
 /* Initialize and register event driver with DPDK Application */
 static struct rte_eventdev_ops otx2_sso_ops = {
 	.dev_infos_get = otx2_sso_info_get,
@@ -696,6 +762,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_link = otx2_sso_port_link,
 	.port_unlink = otx2_sso_port_unlink,
 	.timeout_ticks = otx2_sso_timeout_ticks,
+
+	.dump = otx2_sso_dump,
 };
 
 #define OTX2_SSO_XAE_CNT "xae_cnt"

From patchwork Fri Jun 28 18:23:24 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55618
Date: Fri, 28 Jun 2019 23:53:24 +0530 Message-ID: <20190628182354.228-14-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 13/42] event/octeontx2: add xstats support
From: Pavan Nikhilesh Add support for retrieving statistics from SSO GWS and GGRP. Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- drivers/event/octeontx2/otx2_evdev.c | 5 + drivers/event/octeontx2/otx2_evdev_stats.h | 242 +++++++++++++++++++++ 2 files changed, 247 insertions(+) create mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 6c37c5b5c..51220f447 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,6 +12,7 @@ #include #include +#include "otx2_evdev_stats.h" #include "otx2_evdev.h" #include "otx2_irq.h" @@ -763,6 +764,10 @@ static struct rte_eventdev_ops otx2_sso_ops = { .port_unlink = otx2_sso_port_unlink, .timeout_ticks = otx2_sso_timeout_ticks, + .xstats_get = otx2_sso_xstats_get, + .xstats_reset = otx2_sso_xstats_reset, + .xstats_get_names = otx2_sso_xstats_get_names, + .dump = otx2_sso_dump, }; diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h new file mode 100644 index 000000000..df76a1333 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_stats.h @@ -0,0 +1,242 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd.
+ */ + +#ifndef __OTX2_EVDEV_STATS_H__ +#define __OTX2_EVDEV_STATS_H__ + +#include "otx2_evdev.h" + +struct otx2_sso_xstats_name { + const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE]; + const size_t offset; + const uint64_t mask; + const uint8_t shift; + uint64_t reset_snap[OTX2_SSO_MAX_VHGRP]; +}; + +static struct otx2_sso_xstats_name sso_hws_xstats[] = { + {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration), + 0x3FF, 0, {0} }, + {"affinity_arbitration_credits", + offsetof(struct sso_hws_stats, arbitration), + 0xF, 16, {0} }, +}; + +static struct otx2_sso_xstats_name sso_grp_xstats[] = { + {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0, + {0} }, + {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0, + 0, {0} }, + {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0, + {0} }, + {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0, + {0} }, + {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0, + {0} }, + {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0, + {0} }, + {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3, + 0, {0} }, + {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F, + 16, {0} }, + {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt), + 0xFFFFFFFF, 0, {0} }, +}; + +#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats) +#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats) + +#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS) + +static int +otx2_sso_xstats_get(const struct rte_eventdev *event_dev, + enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id, + const unsigned int ids[], uint64_t values[], unsigned int n) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct otx2_sso_xstats_name *xstats; + struct otx2_sso_xstats_name *xstat; + struct otx2_mbox *mbox = dev->mbox; + uint32_t xstats_mode_count = 0; + uint32_t start_offset = 0; + unsigned int i; + uint64_t value; + void *req_rsp; + int rc; + 
+ switch (mode) { + case RTE_EVENT_DEV_XSTATS_DEVICE: + break; + case RTE_EVENT_DEV_XSTATS_PORT: + if (queue_port_id >= (signed int)dev->nb_event_ports) + goto invalid_value; + + xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS; + xstats = sso_hws_xstats; + + req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->hws = queue_port_id; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + + break; + case RTE_EVENT_DEV_XSTATS_QUEUE: + if (queue_port_id >= (signed int)dev->nb_event_queues) + goto invalid_value; + + xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; + start_offset = OTX2_SSO_NUM_HWS_XSTATS; + xstats = sso_grp_xstats; + + req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->grp = queue_port_id; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + + break; + default: + otx2_err("Invalid mode received"); + goto invalid_value; + }; + + for (i = 0; i < n && i < xstats_mode_count; i++) { + xstat = &xstats[ids[i] - start_offset]; + value = *(uint64_t *)((char *)req_rsp + xstat->offset); + value = (value >> xstat->shift) & xstat->mask; + + values[i] = value; + values[i] -= xstat->reset_snap[queue_port_id]; + } + + return i; +invalid_value: + return -EINVAL; +} + +static int +otx2_sso_xstats_reset(struct rte_eventdev *event_dev, + enum rte_event_dev_xstats_mode mode, + int16_t queue_port_id, const uint32_t ids[], uint32_t n) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct otx2_sso_xstats_name *xstats; + struct otx2_sso_xstats_name *xstat; + struct otx2_mbox *mbox = dev->mbox; + uint32_t xstats_mode_count = 0; + uint32_t start_offset = 0; + unsigned int i; + uint64_t value; + void *req_rsp; + int rc; + + switch (mode) { + case RTE_EVENT_DEV_XSTATS_DEVICE: + return 0; + case RTE_EVENT_DEV_XSTATS_PORT: + if (queue_port_id >= (signed int)dev->nb_event_ports) + goto invalid_value; + + xstats_mode_count = 
OTX2_SSO_NUM_HWS_XSTATS; + xstats = sso_hws_xstats; + + req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->hws = queue_port_id; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + + break; + case RTE_EVENT_DEV_XSTATS_QUEUE: + if (queue_port_id >= (signed int)dev->nb_event_queues) + goto invalid_value; + + xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; + start_offset = OTX2_SSO_NUM_HWS_XSTATS; + xstats = sso_grp_xstats; + + req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->grp = queue_port_id; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + + break; + default: + otx2_err("Invalid mode received"); + goto invalid_value; + }; + + for (i = 0; i < n && i < xstats_mode_count; i++) { + xstat = &xstats[ids[i] - start_offset]; + value = *(uint64_t *)((char *)req_rsp + xstat->offset); + value = (value >> xstat->shift) & xstat->mask; + + xstat->reset_snap[queue_port_id] = value; + } + return i; +invalid_value: + return -EINVAL; +} + +static int +otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev, + enum rte_event_dev_xstats_mode mode, + uint8_t queue_port_id, + struct rte_event_dev_xstats_name *xstats_names, + unsigned int *ids, unsigned int size) +{ + struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS]; + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + uint32_t xstats_mode_count = 0; + uint32_t start_offset = 0; + unsigned int xidx = 0; + unsigned int i; + + for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) { + snprintf(xstats_names_copy[i].name, + sizeof(xstats_names_copy[i].name), "%s", + sso_hws_xstats[i].name); + } + + for (; i < OTX2_SSO_NUM_XSTATS; i++) { + snprintf(xstats_names_copy[i].name, + sizeof(xstats_names_copy[i].name), "%s", + sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name); + } + + switch (mode) { + case RTE_EVENT_DEV_XSTATS_DEVICE: + break; + case
RTE_EVENT_DEV_XSTATS_PORT: + if (queue_port_id >= (signed int)dev->nb_event_ports) + break; + xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS; + break; + case RTE_EVENT_DEV_XSTATS_QUEUE: + if (queue_port_id >= (signed int)dev->nb_event_queues) + break; + xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS; + start_offset = OTX2_SSO_NUM_HWS_XSTATS; + break; + default: + otx2_err("Invalid mode received"); + return -EINVAL; + }; + + if (xstats_mode_count > size || !ids || !xstats_names) + return xstats_mode_count; + + for (i = 0; i < xstats_mode_count; i++) { + xidx = i + start_offset; + strncpy(xstats_names[i].name, xstats_names_copy[xidx].name, + sizeof(xstats_names[i].name)); + ids[i] = xidx; + } + + return i; +} + +#endif From patchwork Fri Jun 28 18:23:25 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55619
Date: Fri, 28 Jun 2019 23:53:25 +0530 Message-ID: <20190628182354.228-15-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 14/42] event/octeontx2: add SSO HW device operations From: Pavan Nikhilesh Add the SSO HW device operations used for enqueue/dequeue.
Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- drivers/event/octeontx2/Makefile | 1 + drivers/event/octeontx2/meson.build | 3 +- drivers/event/octeontx2/otx2_evdev.h | 22 +++ drivers/event/octeontx2/otx2_worker.c | 5 + drivers/event/octeontx2/otx2_worker.h | 187 ++++++++++++++++++++++++++ 5 files changed, 217 insertions(+), 1 deletion(-) create mode 100644 drivers/event/octeontx2/otx2_worker.c create mode 100644 drivers/event/octeontx2/otx2_worker.h diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index 4f09c1fc8..a3de5ca23 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index 5aa8113bd..1d2080b6d 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -2,7 +2,8 @@ # Copyright(C) 2019 Marvell International Ltd. 
# -sources = files('otx2_evdev.c', +sources = files('otx2_worker.c', + 'otx2_evdev.c', 'otx2_evdev_irq.c', ) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index e1d2dcc69..cccce1dea 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -82,6 +82,28 @@ enum otx2_sso_lf_type { SSO_LF_GWS }; +union otx2_sso_event { + uint64_t get_work0; + struct { + uint32_t flow_id:20; + uint32_t sub_event_type:8; + uint32_t event_type:4; + uint8_t op:2; + uint8_t rsvd:4; + uint8_t sched_type:2; + uint8_t queue_id; + uint8_t priority; + uint8_t impl_opaque; + }; +} __rte_aligned(64); + +enum { + SSO_SYNC_ORDERED, + SSO_SYNC_ATOMIC, + SSO_SYNC_UNTAGGED, + SSO_SYNC_EMPTY +}; + struct otx2_sso_evdev { OTX2_DEV; /* Base class */ uint8_t max_event_queues; diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c new file mode 100644 index 000000000..83f535d05 --- /dev/null +++ b/drivers/event/octeontx2/otx2_worker.c @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_worker.h" diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h new file mode 100644 index 000000000..f06ff064e --- /dev/null +++ b/drivers/event/octeontx2/otx2_worker.h @@ -0,0 +1,187 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_WORKER_H__ +#define __OTX2_WORKER_H__ + +#include +#include + +#include +#include "otx2_evdev.h" + +/* SSO Operations */ + +static __rte_always_inline uint16_t +otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev) +{ + union otx2_sso_event event; + uint64_t get_work1; + + otx2_write64(BIT_ULL(16) | /* wait for work. */ + 1, /* Use Mask set 0. 
*/ + ws->getwrk_op); + +#ifdef RTE_ARCH_ARM64 + asm volatile( + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbz %[tag], 63, done%= \n" + " sevl \n" + "rty%=: wfe \n" + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbnz %[tag], 63, rty%= \n" + "done%=: dmb ld \n" + " prfm pldl1keep, [%[wqp]] \n" + : [tag] "=&r" (event.get_work0), + [wqp] "=&r" (get_work1) + : [tag_loc] "r" (ws->tag_op), + [wqp_loc] "r" (ws->wqp_op) + ); +#else + event.get_work0 = otx2_read64(ws->tag_op); + while ((BIT_ULL(63)) & event.get_work0) + event.get_work0 = otx2_read64(ws->tag_op); + + get_work1 = otx2_read64(ws->wqp_op); + rte_prefetch0((const void *)get_work1); +#endif + + event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | + (event.get_work0 & (0x3FFull << 36)) << 4 | + (event.get_work0 & 0xffffffff); + ws->cur_tt = event.sched_type; + ws->cur_grp = event.queue_id; + + + ev->event = event.get_work0; + ev->u64 = get_work1; + + return !!get_work1; +} + +/* Used in cleaning up workslot. 
*/ +static __rte_always_inline uint16_t +otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev) +{ + union otx2_sso_event event; + uint64_t get_work1; + +#ifdef RTE_ARCH_ARM64 + asm volatile( + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbz %[tag], 63, done%= \n" + " sevl \n" + "rty%=: wfe \n" + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbnz %[tag], 63, rty%= \n" + "done%=: dmb ld \n" + " prfm pldl1keep, [%[wqp]] \n" + : [tag] "=&r" (event.get_work0), + [wqp] "=&r" (get_work1) + : [tag_loc] "r" (ws->tag_op), + [wqp_loc] "r" (ws->wqp_op) + ); +#else + event.get_work0 = otx2_read64(ws->tag_op); + while ((BIT_ULL(63)) & event.get_work0) + event.get_work0 = otx2_read64(ws->tag_op); + + get_work1 = otx2_read64(ws->wqp_op); + rte_prefetch0((const void *)get_work1); +#endif + + event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | + (event.get_work0 & (0x3FFull << 36)) << 4 | + (event.get_work0 & 0xffffffff); + ws->cur_tt = event.sched_type; + ws->cur_grp = event.queue_id; + + ev->event = event.get_work0; + ev->u64 = get_work1; + + return !!get_work1; +} + +static __rte_always_inline void +otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr, + const uint32_t tag, const uint8_t new_tt, + const uint16_t grp) +{ + uint64_t add_work0; + + add_work0 = tag | ((uint64_t)(new_tt) << 32); + otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]); +} + +static __rte_always_inline void +otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt, + uint16_t grp) +{ + uint64_t val; + + val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34); + otx2_write64(val, ws->swtag_desched_op); +} + +static __rte_always_inline void +otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt) +{ + uint64_t val; + + val = tag | ((uint64_t)(new_tt & 0x3) << 32); + otx2_write64(val, ws->swtag_norm_op); +} + +static __rte_always_inline void 
+otx2_ssogws_swtag_untag(struct otx2_ssogws *ws) +{ + otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_SWTAG_UNTAG); + ws->cur_tt = SSO_SYNC_UNTAGGED; +} + +static __rte_always_inline void +otx2_ssogws_swtag_flush(struct otx2_ssogws *ws) +{ + otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_SWTAG_FLUSH); + ws->cur_tt = SSO_SYNC_EMPTY; +} + +static __rte_always_inline void +otx2_ssogws_desched(struct otx2_ssogws *ws) +{ + otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_DESCHED); +} + +static __rte_always_inline void +otx2_ssogws_swtag_wait(struct otx2_ssogws *ws) +{ +#ifdef RTE_ARCH_ARM64 + uint64_t swtp; + + asm volatile ( + " ldr %[swtb], [%[swtp_loc]] \n" + " cbz %[swtb], done%= \n" + " sevl \n" + "rty%=: wfe \n" + " ldr %[swtb], [%[swtp_loc]] \n" + " cbnz %[swtb], rty%= \n" + "done%=: \n" + : [swtb] "=&r" (swtp) + : [swtp_loc] "r" (ws->swtp_op) + ); +#else + /* Wait for the SWTAG/SWTAG_FULL operation */ + while (otx2_read64(ws->swtp_op)) + ; +#endif +} + +#endif From patchwork Fri Jun 28 18:23:26 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55620
Date: Fri, 28 Jun 2019 23:53:26 +0530 Message-ID: <20190628182354.228-16-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 15/42] event/octeontx2: add worker enqueue functions From: Pavan Nikhilesh Add worker event enqueue
functions. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.h | 8 ++ drivers/event/octeontx2/otx2_worker.c | 136 ++++++++++++++++++++++++++ 2 files changed, 144 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index cccce1dea..4f2fd33df 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -170,6 +170,14 @@ parse_kvargs_value(const char *key, const char *value, void *opaque) return 0; } +uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev); +uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); +uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); +uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); + /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c index 83f535d05..044c5f132 100644 --- a/drivers/event/octeontx2/otx2_worker.c +++ b/drivers/event/octeontx2/otx2_worker.c @@ -3,3 +3,139 @@ */ #include "otx2_worker.h" + +static __rte_noinline uint8_t +otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + const uint64_t event_ptr = ev->u64; + const uint16_t grp = ev->queue_id; + + if (ws->xaq_lmt <= *ws->fc_mem) + return 0; + + otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp); + + return 1; +} + +static __rte_always_inline void +otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + const uint8_t cur_tt = ws->cur_tt; + + /* 96XX model + * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED + * + * 
SSO_SYNC_ORDERED norm norm untag + * SSO_SYNC_ATOMIC norm norm untag + * SSO_SYNC_UNTAGGED norm norm NOOP + */ + + if (new_tt == SSO_SYNC_UNTAGGED) { + if (cur_tt != SSO_SYNC_UNTAGGED) + otx2_ssogws_swtag_untag(ws); + } else { + otx2_ssogws_swtag_norm(ws, tag, new_tt); + } + + ws->swtag_req = 1; +} + +static __rte_always_inline void +otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev, + const uint16_t grp) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + + otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_UPD_WQP_GRP1); + rte_smp_wmb(); + otx2_ssogws_swtag_desched(ws, tag, new_tt, grp); +} + +static __rte_always_inline void +otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev) +{ + const uint8_t grp = ev->queue_id; + + /* Group hasn't changed, Use SWTAG to forward the event */ + if (ws->cur_grp == grp) + otx2_ssogws_fwd_swtag(ws, ev); + else + /* + * Group has been changed for group based work pipelining, + * Use deschedule/add_work operation to transfer the event to + * new group/core + */ + otx2_ssogws_fwd_group(ws, ev, grp); +} + +static __rte_always_inline void +otx2_ssogws_release_event(struct otx2_ssogws *ws) +{ + otx2_ssogws_swtag_flush(ws); +} + +uint16_t __hot +otx2_ssogws_enq(void *port, const struct rte_event *ev) +{ + struct otx2_ssogws *ws = port; + + switch (ev->op) { + case RTE_EVENT_OP_NEW: + rte_smp_mb(); + return otx2_ssogws_new_event(ws, ev); + case RTE_EVENT_OP_FORWARD: + otx2_ssogws_forward_event(ws, ev); + break; + case RTE_EVENT_OP_RELEASE: + otx2_ssogws_release_event(ws); + break; + default: + return 0; + } + + return 1; +} + +uint16_t __hot +otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + RTE_SET_USED(nb_events); + return otx2_ssogws_enq(port, ev); +} + +uint16_t __hot +otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + struct otx2_ssogws 
*ws = port; + uint16_t i, rc = 1; + + rte_smp_mb(); + if (ws->xaq_lmt <= *ws->fc_mem) + return 0; + + for (i = 0; i < nb_events && rc; i++) + rc = otx2_ssogws_new_event(ws, &ev[i]); + + return nb_events; +} + +uint16_t __hot +otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + otx2_ssogws_forward_event(ws, ev); + + return 1; +} From patchwork Fri Jun 28 18:23:27 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55621
Date: Fri, 28 Jun 2019 23:53:27 +0530 Message-ID: <20190628182354.228-17-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 16/42] event/octeontx2: add worker dequeue functions From: Pavan Nikhilesh Add worker event dequeue functions.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.h | 10 +++++ drivers/event/octeontx2/otx2_worker.c | 55 +++++++++++++++++++++++++++ 2 files changed, 65 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 4f2fd33df..6f8d709b6 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -178,6 +178,16 @@ uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +uint16_t otx2_ssogws_deq(void *port, struct rte_event *ev, + uint64_t timeout_ticks); +uint16_t otx2_ssogws_deq_burst(void *port, struct rte_event ev[], + uint16_t nb_events, uint64_t timeout_ticks); +uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev, + uint64_t timeout_ticks); +uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[], + uint16_t nb_events, + uint64_t timeout_ticks); + /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c index 044c5f132..edc574673 100644 --- a/drivers/event/octeontx2/otx2_worker.c +++ b/drivers/event/octeontx2/otx2_worker.c @@ -81,6 +81,61 @@ otx2_ssogws_release_event(struct otx2_ssogws *ws) otx2_ssogws_swtag_flush(ws); } +uint16_t __hot +otx2_ssogws_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(timeout_ticks); + + if (ws->swtag_req) { + ws->swtag_req = 0; + otx2_ssogws_swtag_wait(ws); + return 1; + } + + return otx2_ssogws_get_work(ws, ev); +} + +uint16_t __hot +otx2_ssogws_deq_burst(void *port, struct rte_event ev[], uint16_t nb_events, + uint64_t timeout_ticks) +{ + RTE_SET_USED(nb_events); + + return otx2_ssogws_deq(port, ev, timeout_ticks); +} + +uint16_t __hot 
+otx2_ssogws_deq_timeout(void *port, struct rte_event *ev, + uint64_t timeout_ticks) +{ + struct otx2_ssogws *ws = port; + uint16_t ret = 1; + uint64_t iter; + + if (ws->swtag_req) { + ws->swtag_req = 0; + otx2_ssogws_swtag_wait(ws); + return ret; + } + + ret = otx2_ssogws_get_work(ws, ev); + for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) + ret = otx2_ssogws_get_work(ws, ev); + + return ret; +} + +uint16_t __hot +otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[], + uint16_t nb_events, uint64_t timeout_ticks) +{ + RTE_SET_USED(nb_events); + + return otx2_ssogws_deq_timeout(port, ev, timeout_ticks); +} + uint16_t __hot otx2_ssogws_enq(void *port, const struct rte_event *ev) { From patchwork Fri Jun 28 18:23:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55622 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DCBC74CA9; Fri, 28 Jun 2019 20:25:04 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 354751B9B7 for ; Fri, 28 Jun 2019 20:24:39 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIMM3p012139 for ; Fri, 28 Jun 2019 11:24:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=8Am0ILCtdE3oj6/ChaA6Le2VTSz4VLIr6Bvg8SQNTYA=; b=iqHaRoRSODAAXaP2BwQCZbpFNg0iNQ9ELYdPI7DtnimQDl95qO/WcHUxTPV7XXzOEhsQ epHFTEJhm8bUJmTEZwC8FJoZa82+MtgdwX2ur0VWR0exKPsefx5xT1YRNnjZ37zVwxGa 
Date: Fri, 28 Jun 2019 23:53:28 +0530
Message-ID: <20190628182354.228-18-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 17/42] event/octeontx2: add octeontx2 SSO dual workslot mode
From: Pavan Nikhilesh

The OcteonTx2 AP core SSO cache contains two entries; each entry caches the state of a single GWS, i.e. an event port. The AP core requests events from the SSO using the following sequence: 1. Write to SSOW_LF_GWS_OP_GET_WORK. 2. Wait for the SSO to complete scheduling by polling on SSOW_LF_GWS_TAG[63]. 3.
SSO notifies core by clearing SSOW_LF_GWS_TAG[63] and if work is valid SSOW_LF_GWS_WQP is non-zero. The above sequence uses only one in-core cache entry. In dual workslot mode we try to use both the in-core cache entries by triggering GET_WORK on a second workslot as soon as the above sequence completes. This effectively hides the schedule latency of SSO if there are enough events with unique flow_tags in-flight. This mode reserves two SSO GWS lf's for each event port effectively doubling single core performance. Dual workslot mode is the default mode of operation in octeontx2. Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- drivers/event/octeontx2/otx2_evdev.c | 204 ++++++++++++++++++--- drivers/event/octeontx2/otx2_evdev.h | 17 ++ drivers/event/octeontx2/otx2_evdev_irq.c | 4 +- drivers/event/octeontx2/otx2_evdev_stats.h | 52 +++++- 4 files changed, 242 insertions(+), 35 deletions(-) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 51220f447..16d5e7dfa 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -20,7 +20,7 @@ static inline int sso_get_msix_offsets(const struct rte_eventdev *event_dev) { struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); - uint8_t nb_ports = dev->nb_event_ports; + uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 
2 : 1); struct otx2_mbox *mbox = dev->mbox; struct msix_offset_rsp *msix_rsp; int i, rc; @@ -82,16 +82,26 @@ otx2_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[], const uint8_t priorities[], uint16_t nb_links) { + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); uint8_t port_id = 0; uint16_t link; - RTE_SET_USED(event_dev); RTE_SET_USED(priorities); for (link = 0; link < nb_links; link++) { - struct otx2_ssogws *ws = port; - - port_id = ws->port; - sso_port_link_modify(ws, queues[link], true); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = port; + + port_id = ws->port; + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[0], queues[link], true); + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[1], queues[link], true); + } else { + struct otx2_ssogws *ws = port; + + port_id = ws->port; + sso_port_link_modify(ws, queues[link], true); + } } sso_func_trace("Port=%d nb_links=%d", port_id, nb_links); @@ -102,15 +112,27 @@ static int otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[], uint16_t nb_unlinks) { + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); uint8_t port_id = 0; uint16_t unlink; - RTE_SET_USED(event_dev); for (unlink = 0; unlink < nb_unlinks; unlink++) { - struct otx2_ssogws *ws = port; - - port_id = ws->port; - sso_port_link_modify(ws, queues[unlink], false); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = port; + + port_id = ws->port; + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[0], queues[unlink], + false); + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[1], queues[unlink], + false); + } else { + struct otx2_ssogws *ws = port; + + port_id = ws->port; + sso_port_link_modify(ws, queues[unlink], false); + } } sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks); @@ -242,11 +264,23 @@ sso_clr_links(const struct rte_eventdev *event_dev) int i, j; for (i = 0; i < dev->nb_event_ports; i++) { - struct otx2_ssogws 
*ws; + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws; - ws = event_dev->data->ports[i]; - for (j = 0; j < dev->nb_event_queues; j++) - sso_port_link_modify(ws, j, false); + ws = event_dev->data->ports[i]; + for (j = 0; j < dev->nb_event_queues; j++) { + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[0], j, false); + sso_port_link_modify((struct otx2_ssogws *) + &ws->ws_state[1], j, false); + } + } else { + struct otx2_ssogws *ws; + + ws = event_dev->data->ports[i]; + for (j = 0; j < dev->nb_event_queues; j++) + sso_port_link_modify(ws, j, false); + } } } @@ -261,6 +295,73 @@ sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base) ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED; } +static int +sso_configure_dual_ports(const struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct otx2_mbox *mbox = dev->mbox; + uint8_t vws = 0; + uint8_t nb_lf; + int i, rc; + + otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports); + + nb_lf = dev->nb_event_ports * 2; + /* Ask AF to attach required LFs. 
*/ + rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true); + if (rc < 0) { + otx2_err("Failed to attach SSO GWS LF"); + return -ENODEV; + } + + if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) { + sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false); + otx2_err("Failed to init SSO GWS LF"); + return -ENODEV; + } + + for (i = 0; i < dev->nb_event_ports; i++) { + struct otx2_ssogws_dual *ws; + uintptr_t base; + + /* Free memory prior to re-allocation if needed */ + if (event_dev->data->ports[i] != NULL) { + ws = event_dev->data->ports[i]; + rte_free(ws); + ws = NULL; + } + + /* Allocate event port memory */ + ws = rte_zmalloc_socket("otx2_sso_ws", + sizeof(struct otx2_ssogws_dual), + RTE_CACHE_LINE_SIZE, + event_dev->data->socket_id); + if (ws == NULL) { + otx2_err("Failed to alloc memory for port=%d", i); + rc = -ENOMEM; + break; + } + + ws->port = i; + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12); + sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base); + vws++; + + base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12); + sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base); + vws++; + + event_dev->data->ports[i] = ws; + } + + if (rc < 0) { + sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false); + sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false); + } + + return rc; +} + static int sso_configure_ports(const struct rte_eventdev *event_dev) { @@ -465,6 +566,7 @@ sso_lf_teardown(struct otx2_sso_evdev *dev, break; case SSO_LF_GWS: nb_lf = dev->nb_event_ports; + nb_lf *= dev->dual_ws ? 
2 : 1; break; default: return; @@ -530,7 +632,12 @@ otx2_sso_configure(const struct rte_eventdev *event_dev) dev->nb_event_queues = conf->nb_event_queues; dev->nb_event_ports = conf->nb_event_ports; - if (sso_configure_ports(event_dev)) { + if (dev->dual_ws) + rc = sso_configure_dual_ports(event_dev); + else + rc = sso_configure_ports(event_dev); + + if (rc < 0) { otx2_err("Failed to configure event ports"); return -ENODEV; } @@ -660,14 +767,27 @@ otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id, /* Set get_work timeout for HWS */ val = NSEC2USEC(dev->deq_tmo_ns) - 1; - struct otx2_ssogws *ws = event_dev->data->ports[port_id]; - uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op); - - rte_memcpy(ws->grps_base, grps_base, - sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP); - ws->fc_mem = dev->fc_mem; - ws->xaq_lmt = dev->xaq_lmt; - otx2_write64(val, base + SSOW_LF_GWS_NW_TIM); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id]; + + rte_memcpy(ws->grps_base, grps_base, + sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP); + ws->fc_mem = dev->fc_mem; + ws->xaq_lmt = dev->xaq_lmt; + otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR( + ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM); + otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR( + ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM); + } else { + struct otx2_ssogws *ws = event_dev->data->ports[port_id]; + uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op); + + rte_memcpy(ws->grps_base, grps_base, + sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP); + ws->fc_mem = dev->fc_mem; + ws->xaq_lmt = dev->xaq_lmt; + otx2_write64(val, base + SSOW_LF_GWS_NW_TIM); + } otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]); @@ -735,18 +855,37 @@ otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f) uint8_t queue; uint8_t port; + fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ? 
+ "dual_ws" : "single_ws"); /* Dump SSOW registers */ for (port = 0; port < dev->nb_event_ports; port++) { - fprintf(f, "[%s]SSO single workslot[%d] dump\n", - __func__, port); - ssogws_dump(event_dev->data->ports[port], f); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = + event_dev->data->ports[port]; + + fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n", + __func__, port, 0); + ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f); + fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n", + __func__, port, 1); + ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f); + } else { + fprintf(f, "[%s]SSO single workslot[%d] dump\n", + __func__, port); + ssogws_dump(event_dev->data->ports[port], f); + } } /* Dump SSO registers */ for (queue = 0; queue < dev->nb_event_queues; queue++) { fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue); - struct otx2_ssogws *ws = event_dev->data->ports[0]; - ssoggrp_dump(ws->grps_base[queue], f); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = event_dev->data->ports[0]; + ssoggrp_dump(ws->grps_base[queue], f); + } else { + struct otx2_ssogws *ws = event_dev->data->ports[0]; + ssoggrp_dump(ws->grps_base[queue], f); + } } } @@ -879,7 +1018,14 @@ otx2_sso_init(struct rte_eventdev *event_dev) goto otx2_npa_lf_uninit; } + dev->dual_ws = 1; sso_parse_devargs(dev, pci_dev->device.devargs); + if (dev->dual_ws) { + otx2_sso_dbg("Using dual workslot mode"); + dev->max_event_ports = dev->max_event_ports / 2; + } else { + otx2_sso_dbg("Using single workslot mode"); + } otx2_sso_pf_func_set(dev->pf_func); otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d", diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 6f8d709b6..72de9ace5 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -121,6 +121,7 @@ struct otx2_sso_evdev { uint64_t nb_xaq_cfg; rte_iova_t fc_iova; struct rte_mempool *xaq_pool; + uint8_t dual_ws; /* Dev args */ uint32_t 
xae_cnt; /* HW const */ @@ -155,6 +156,22 @@ struct otx2_ssogws { uintptr_t grps_base[OTX2_SSO_MAX_VHGRP]; } __rte_cache_aligned; +struct otx2_ssogws_state { + OTX2_SSOGWS_OPS; +}; + +struct otx2_ssogws_dual { + /* Get Work Fastpath data */ + struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */ + uint8_t swtag_req; + uint8_t vws; /* Ping pong bit */ + uint8_t port; + /* Add Work Fastpath data */ + uint64_t xaq_lmt __rte_cache_aligned; + uint64_t *fc_mem; + uintptr_t grps_base[OTX2_SSO_MAX_VHGRP]; +} __rte_cache_aligned; + static inline struct otx2_sso_evdev * sso_pmd_priv(const struct rte_eventdev *event_dev) { diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c index 7df21cc24..7379bb17f 100644 --- a/drivers/event/octeontx2/otx2_evdev_irq.c +++ b/drivers/event/octeontx2/otx2_evdev_irq.c @@ -117,7 +117,7 @@ sso_register_irqs(const struct rte_eventdev *event_dev) int i, rc = -EINVAL; uint8_t nb_ports; - nb_ports = dev->nb_event_ports; + nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1); for (i = 0; i < dev->nb_event_queues; i++) { if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) { @@ -159,7 +159,7 @@ sso_unregister_irqs(const struct rte_eventdev *event_dev) uint8_t nb_ports; int i; - nb_ports = dev->nb_event_ports; + nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1); for (i = 0; i < dev->nb_event_queues; i++) { uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h index df76a1333..9d7c694ee 100644 --- a/drivers/event/octeontx2/otx2_evdev_stats.h +++ b/drivers/event/octeontx2/otx2_evdev_stats.h @@ -76,11 +76,29 @@ otx2_sso_xstats_get(const struct rte_eventdev *event_dev, xstats = sso_hws_xstats; req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = queue_port_id; + ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ? 
+ 2 * queue_port_id : queue_port_id; rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); if (rc < 0) goto invalid_value; + if (dev->dual_ws) { + for (i = 0; i < n && i < xstats_mode_count; i++) { + xstat = &xstats[ids[i] - start_offset]; + values[i] = *(uint64_t *) + ((char *)req_rsp + xstat->offset); + values[i] = (values[i] >> xstat->shift) & + xstat->mask; + } + + req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->hws = + (2 * queue_port_id) + 1; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + } + break; case RTE_EVENT_DEV_XSTATS_QUEUE: if (queue_port_id >= (signed int)dev->nb_event_queues) @@ -107,7 +125,11 @@ otx2_sso_xstats_get(const struct rte_eventdev *event_dev, value = *(uint64_t *)((char *)req_rsp + xstat->offset); value = (value >> xstat->shift) & xstat->mask; - values[i] = value; + if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws) + values[i] += value; + else + values[i] = value; + values[i] -= xstat->reset_snap[queue_port_id]; } @@ -143,11 +165,30 @@ otx2_sso_xstats_reset(struct rte_eventdev *event_dev, xstats = sso_hws_xstats; req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); - ((struct sso_info_req *)req_rsp)->hws = queue_port_id; + ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ? 
+ 2 * queue_port_id : queue_port_id; rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); if (rc < 0) goto invalid_value; + if (dev->dual_ws) { + for (i = 0; i < n && i < xstats_mode_count; i++) { + xstat = &xstats[ids[i] - start_offset]; + xstat->reset_snap[queue_port_id] = *(uint64_t *) + ((char *)req_rsp + xstat->offset); + xstat->reset_snap[queue_port_id] = + (xstat->reset_snap[queue_port_id] >> + xstat->shift) & xstat->mask; + } + + req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox); + ((struct sso_info_req *)req_rsp)->hws = + (2 * queue_port_id) + 1; + rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp); + if (rc < 0) + goto invalid_value; + } + break; case RTE_EVENT_DEV_XSTATS_QUEUE: if (queue_port_id >= (signed int)dev->nb_event_queues) @@ -174,7 +215,10 @@ otx2_sso_xstats_reset(struct rte_eventdev *event_dev, value = *(uint64_t *)((char *)req_rsp + xstat->offset); value = (value >> xstat->shift) & xstat->mask; - xstat->reset_snap[queue_port_id] = value; + if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws) + xstat->reset_snap[queue_port_id] += value; + else + xstat->reset_snap[queue_port_id] = value; } return i; invalid_value: From patchwork Fri Jun 28 18:23:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55623 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 496341B9F4; Fri, 28 Jun 2019 20:25:07 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 036ED1B955 for ; Fri, 28 Jun 2019 20:24:41 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SILhn1011549 for ; Fri, 28 Jun 2019 11:24:41 -0700 
Date: Fri, 28 Jun 2019 23:53:29 +0530
Message-ID: <20190628182354.228-19-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 18/42] event/octeontx2: add SSO dual GWS HW device operations
List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add SSO dual workslot mode GWS HW device operations. Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- drivers/event/octeontx2/Makefile | 1 + drivers/event/octeontx2/meson.build | 1 + drivers/event/octeontx2/otx2_worker_dual.c | 6 ++ drivers/event/octeontx2/otx2_worker_dual.h | 76 ++++++++++++++++++++++ 4 files changed, 84 insertions(+) create mode 100644 drivers/event/octeontx2/otx2_worker_dual.c create mode 100644 drivers/event/octeontx2/otx2_worker_dual.h diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index a3de5ca23..dfecda599 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index 1d2080b6d..c2a5f3e3d 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files('otx2_worker.c', + 'otx2_worker_dual.c', 'otx2_evdev.c', 'otx2_evdev_irq.c', ) diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c new file mode 100644 index 000000000..f762436aa --- /dev/null +++ b/drivers/event/octeontx2/otx2_worker_dual.c @@ -0,0 +1,6 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_worker_dual.h" +#include "otx2_worker.h" diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h new file mode 100644 index 000000000..d8453d1f7 --- /dev/null +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_WORKER_DUAL_H__ +#define __OTX2_WORKER_DUAL_H__ + +#include +#include + +#include +#include "otx2_evdev.h" + +/* SSO Operations */ +static __rte_always_inline uint16_t +otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws, + struct otx2_ssogws_state *ws_pair, + struct rte_event *ev) +{ + const uint64_t set_gw = BIT_ULL(16) | 1; + union otx2_sso_event event; + uint64_t get_work1; + +#ifdef RTE_ARCH_ARM64 + asm volatile( + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbz %[tag], 63, done%= \n" + " sevl \n" + "rty%=: wfe \n" + " ldr %[tag], [%[tag_loc]] \n" + " ldr %[wqp], [%[wqp_loc]] \n" + " tbnz %[tag], 63, rty%= \n" + "done%=: str %[gw], [%[pong]] \n" + " dmb ld \n" + " prfm pldl1keep, [%[wqp]] \n" + : [tag] "=&r" (event.get_work0), + [wqp] "=&r" (get_work1) + : [tag_loc] "r" (ws->tag_op), + [wqp_loc] "r" (ws->wqp_op), + [gw] "r" (set_gw), + [pong] "r" (ws_pair->getwrk_op) + ); +#else + event.get_work0 = otx2_read64(ws->tag_op); + while ((BIT_ULL(63)) & event.get_work0) + event.get_work0 = otx2_read64(ws->tag_op); + get_work1 = otx2_read64(ws->wqp_op); + otx2_write64(set_gw, ws_pair->getwrk_op); + + rte_prefetch0((const void *)get_work1); +#endif + event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 | + (event.get_work0 & (0x3FFull << 36)) << 4 | + (event.get_work0 & 0xffffffff); + ws->cur_tt = event.sched_type; + ws->cur_grp = event.queue_id; + + ev->event = event.get_work0; + ev->u64 = get_work1; + + return !!get_work1; +} + +static __rte_always_inline void +otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t 
event_ptr, + const uint32_t tag, const uint8_t new_tt, + const uint16_t grp) +{ + uint64_t add_work0; + + add_work0 = tag | ((uint64_t)(new_tt) << 32); + otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]); +} + +#endif From patchwork Fri Jun 28 18:23:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55624 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D213C1B9F8; Fri, 28 Jun 2019 20:25:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 4E15F1B9BE for ; Fri, 28 Jun 2019 20:24:43 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIKhoQ010889 for ; Fri, 28 Jun 2019 11:24:42 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=H2e8XHnNX3HsyRMXQCs/qfVOOfDoJJ9PAscdDpf9/fI=; b=tnXl9MK8qkSda6z475OiRS+TbB0hJqmhAJLYSE0jDybVSRVWdapi/jfjvMqUh9OvIe7g uHchUEu9AP1wGsulSDx8XTd7MrE458PuHOQYVyY8kJy2B5ljY8py0wwXFL/dNg9lExNk j7dzWaWCD9V0YE0Jt5xPfaNwDgOXJ3YU8h+UoSkny2lasG3VgDa+uCfcRTp+QVwBfcpE ArODt8NazfC1TrURRBL8FKlgVQGeEshrzxs3W5t/n6ZjwityKbMat9dgrCghJ2ctqSCC GepJjNB4/t8vXrS8kKVdiyakAEmrc7uLRYXZdM5ah+nnPbSQZSHC0qb97KthzW+gJS/R aw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2tdd77agmk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 28 Jun 2019 11:24:42 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP 
Date: Fri, 28 Jun 2019 23:53:30 +0530
Message-ID: <20190628182354.228-20-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 19/42] event/octeontx2: add worker dual GWS enqueue functions
From: Pavan Nikhilesh

Add dual workslot mode event enqueue functions.
Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- drivers/event/octeontx2/otx2_evdev.h | 9 ++ drivers/event/octeontx2/otx2_worker_dual.c | 135 +++++++++++++++++++++ 2 files changed, 144 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 72de9ace5..fd2a4c330 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -187,6 +187,7 @@ parse_kvargs_value(const char *key, const char *value, void *opaque) return 0; } +/* Single WS API's */ uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev); uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events); @@ -204,6 +205,14 @@ uint16_t otx2_ssogws_deq_timeout(void *port, struct rte_event *ev, uint16_t otx2_ssogws_deq_timeout_burst(void *port, struct rte_event ev[], uint16_t nb_events, uint64_t timeout_ticks); +/* Dual WS API's */ +uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev); +uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); +uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); +uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], + uint16_t nb_events); /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c index f762436aa..661c78c23 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.c +++ b/drivers/event/octeontx2/otx2_worker_dual.c @@ -4,3 +4,138 @@ #include "otx2_worker_dual.h" #include "otx2_worker.h" + +static __rte_noinline uint8_t +otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws, + const struct rte_event *ev) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + const uint64_t event_ptr = ev->u64; + const uint16_t grp = ev->queue_id; + + if 
(ws->xaq_lmt <= *ws->fc_mem) + return 0; + + otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp); + + return 1; +} + +static __rte_always_inline void +otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws, + const struct rte_event *ev) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + const uint8_t cur_tt = ws->cur_tt; + + /* 96XX model + * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED + * + * SSO_SYNC_ORDERED norm norm untag + * SSO_SYNC_ATOMIC norm norm untag + * SSO_SYNC_UNTAGGED norm norm NOOP + */ + if (new_tt == SSO_SYNC_UNTAGGED) { + if (cur_tt != SSO_SYNC_UNTAGGED) + otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws); + } else { + otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt); + } +} + +static __rte_always_inline void +otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws, + const struct rte_event *ev, const uint16_t grp) +{ + const uint32_t tag = (uint32_t)ev->event; + const uint8_t new_tt = ev->sched_type; + + otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_UPD_WQP_GRP1); + rte_smp_wmb(); + otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp); +} + +static __rte_always_inline void +otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws, + struct otx2_ssogws_state *vws, + const struct rte_event *ev) +{ + const uint8_t grp = ev->queue_id; + + /* Group hasn't changed, Use SWTAG to forward the event */ + if (vws->cur_grp == grp) { + otx2_ssogws_dual_fwd_swtag(vws, ev); + ws->swtag_req = 1; + } else { + /* + * Group has been changed for group based work pipelining, + * Use deschedule/add_work operation to transfer the event to + * new group/core + */ + otx2_ssogws_dual_fwd_group(vws, ev, grp); + } +} + +uint16_t __hot +otx2_ssogws_dual_enq(void *port, const struct rte_event *ev) +{ + struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; + + switch (ev->op) { + case RTE_EVENT_OP_NEW: + 
rte_smp_mb(); + return otx2_ssogws_dual_new_event(ws, ev); + case RTE_EVENT_OP_FORWARD: + otx2_ssogws_dual_forward_event(ws, vws, ev); + break; + case RTE_EVENT_OP_RELEASE: + otx2_ssogws_swtag_flush((struct otx2_ssogws *)vws); + break; + default: + return 0; + } + + return 1; +} + +uint16_t __hot +otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + RTE_SET_USED(nb_events); + return otx2_ssogws_dual_enq(port, ev); +} + +uint16_t __hot +otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + uint16_t i, rc = 1; + + rte_smp_mb(); + if (ws->xaq_lmt <= *ws->fc_mem) + return 0; + + for (i = 0; i < nb_events && rc; i++) + rc = otx2_ssogws_dual_new_event(ws, &ev[i]); + + return nb_events; +} + +uint16_t __hot +otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], + uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; + + RTE_SET_USED(nb_events); + otx2_ssogws_dual_forward_event(ws, vws, ev); + + return 1; +} From patchwork Fri Jun 28 18:23:31 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55625 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:31 +0530 Message-ID: <20190628182354.228-21-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 20/42] event/octeontx2: add worker dual GWS dequeue functions
From: Pavan Nikhilesh Add worker dual workslot mode dequeue functions. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.h | 9 +++ drivers/event/octeontx2/otx2_worker_dual.c | 66 ++++++++++++++++++++++ 2 files changed, 75 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index fd2a4c330..30b5d2c32 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -214,6 +214,15 @@ uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +uint16_t otx2_ssogws_dual_deq(void *port, struct rte_event *ev, + uint64_t timeout_ticks); +uint16_t otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[], + uint16_t nb_events, uint64_t timeout_ticks); +uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, + uint64_t timeout_ticks); +uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], + uint16_t nb_events, + uint64_t timeout_ticks); /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c index 661c78c23..58fd588f6 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.c +++ b/drivers/event/octeontx2/otx2_worker_dual.c @@ -139,3 +139,69 @@ otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], return 1; } + +uint16_t __hot +otx2_ssogws_dual_deq(void *port, struct rte_event *ev, uint64_t timeout_ticks) +{ + struct otx2_ssogws_dual *ws = port; + uint8_t gw; + + RTE_SET_USED(timeout_ticks); + if (ws->swtag_req) { + otx2_ssogws_swtag_wait((struct otx2_ssogws *) + &ws->ws_state[!ws->vws]); + ws->swtag_req = 0; + return 1; + } + + gw =
otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], + &ws->ws_state[!ws->vws], ev); + ws->vws = !ws->vws; + + return gw; +} + +uint16_t __hot +otx2_ssogws_dual_deq_burst(void *port, struct rte_event ev[], + uint16_t nb_events, uint64_t timeout_ticks) +{ + RTE_SET_USED(nb_events); + + return otx2_ssogws_dual_deq(port, ev, timeout_ticks); +} + +uint16_t __hot +otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, + uint64_t timeout_ticks) +{ + struct otx2_ssogws_dual *ws = port; + uint64_t iter; + uint8_t gw; + + if (ws->swtag_req) { + otx2_ssogws_swtag_wait((struct otx2_ssogws *) + &ws->ws_state[!ws->vws]); + ws->swtag_req = 0; + return 1; + } + + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], + &ws->ws_state[!ws->vws], ev); + ws->vws = !ws->vws; + for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { + gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], + &ws->ws_state[!ws->vws], ev); + ws->vws = !ws->vws; + } + + return gw; +} + +uint16_t __hot +otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], + uint16_t nb_events, uint64_t timeout_ticks) +{ + RTE_SET_USED(nb_events); + + return otx2_ssogws_dual_deq_timeout(port, ev, timeout_ticks); +} From patchwork Fri Jun 28 18:23:32 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55626 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:32 +0530 Message-ID: <20190628182354.228-22-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 21/42] event/octeontx2: add devargs to force legacy mode
From: Pavan Nikhilesh The OCTEON TX2 SSO is set to use dual workslot mode by default. Add a devargs option to force legacy mode, i.e. single workslot mode. Example: --dev "0002:0e:00.0,single_ws=1" Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/octeontx2.rst | 8 ++++++++ drivers/event/octeontx2/otx2_evdev.c | 8 +++++++- drivers/event/octeontx2/otx2_evdev.h | 11 ++++++++++- 3 files changed, 25 insertions(+), 2 deletions(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index f83cf1e9d..c864f39f9 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -58,6 +58,14 @@ Runtime Config Options --dev "0002:0e:00.0,xae_cnt=16384" +- ``Force legacy mode`` + + The ``single_ws`` devargs parameter is introduced to force legacy mode, i.e. + single workslot mode in SSO, and disable the default dual workslot mode.
+ For example:: + + --dev "0002:0e:00.0,single_ws=1" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 16d5e7dfa..5dc39f029 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -911,11 +911,13 @@ static struct rte_eventdev_ops otx2_sso_ops = { }; #define OTX2_SSO_XAE_CNT "xae_cnt" +#define OTX2_SSO_SINGLE_WS "single_ws" static void sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) { struct rte_kvargs *kvlist; + uint8_t single_ws = 0; if (devargs == NULL) return; @@ -925,7 +927,10 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value, &dev->xae_cnt); + rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag, + &single_ws); + dev->dual_ws = !single_ws; rte_kvargs_free(kvlist); } @@ -1075,4 +1080,5 @@ otx2_sso_fini(struct rte_eventdev *event_dev) RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso); RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); -RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "="); +RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=" + OTX2_SSO_SINGLE_WS "=1"); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 30b5d2c32..8e614b109 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -121,8 +121,8 @@ struct otx2_sso_evdev { uint64_t nb_xaq_cfg; rte_iova_t fc_iova; struct rte_mempool *xaq_pool; - uint8_t dual_ws; /* Dev args */ + uint8_t dual_ws; uint32_t xae_cnt; /* HW const */ uint32_t xae_waes; @@ -178,6 +178,15 @@ sso_pmd_priv(const struct rte_eventdev *event_dev) return event_dev->data->dev_private; } +static inline int +parse_kvargs_flag(const char *key, const char *value, void *opaque) +{ + RTE_SET_USED(key); + + *(uint8_t 
*)opaque = !!atoi(value); + return 0; +} + static inline int parse_kvargs_value(const char *key, const char *value, void *opaque) { From patchwork Fri Jun 28 18:23:33 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55627 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:33 +0530 Message-ID: <20190628182354.228-23-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 22/42] event/octeontx2: add device start function From: Pavan Nikhilesh Add the eventdev start function along with a few cleanup APIs to maintain sanity.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.c | 127 +++++++++++++++++++++++++- drivers/event/octeontx2/otx2_evdev.h | 6 ++ drivers/event/octeontx2/otx2_worker.c | 74 +++++++++++++++ 3 files changed, 206 insertions(+), 1 deletion(-) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 5dc39f029..d6ddee1cd 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -38,6 +38,41 @@ sso_get_msix_offsets(const struct rte_eventdev *event_dev) return rc; } +void +sso_fastpath_fns_set(struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + + event_dev->enqueue = otx2_ssogws_enq; + event_dev->enqueue_burst = otx2_ssogws_enq_burst; + event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst; + event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst; + + event_dev->dequeue = otx2_ssogws_deq; + event_dev->dequeue_burst = otx2_ssogws_deq_burst; + if (dev->is_timeout_deq) { + event_dev->dequeue = otx2_ssogws_deq_timeout; + event_dev->dequeue_burst = otx2_ssogws_deq_timeout_burst; + } + + if (dev->dual_ws) { + event_dev->enqueue = otx2_ssogws_dual_enq; + event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst; + event_dev->enqueue_new_burst = + otx2_ssogws_dual_enq_new_burst; + event_dev->enqueue_forward_burst = + otx2_ssogws_dual_enq_fwd_burst; + event_dev->dequeue = otx2_ssogws_dual_deq; + event_dev->dequeue_burst = otx2_ssogws_dual_deq_burst; + if (dev->is_timeout_deq) { + event_dev->dequeue = otx2_ssogws_dual_deq_timeout; + event_dev->dequeue_burst = + otx2_ssogws_dual_deq_timeout_burst; + } + } + rte_mb(); +} + static void otx2_sso_info_get(struct rte_eventdev *event_dev, struct rte_event_dev_info *dev_info) @@ -889,6 +924,93 @@ otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f) } } +static void +otx2_handle_event(void *arg, struct rte_event event) +{ + struct rte_eventdev *event_dev = arg; + + if 
(event_dev->dev_ops->dev_stop_flush != NULL) + event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id, + event, event_dev->data->dev_stop_flush_arg); +} + +static void +sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + uint16_t i; + + for (i = 0; i < dev->nb_event_ports; i++) { + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws; + + ws = event_dev->data->ports[i]; + ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]); + ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]); + ws->swtag_req = 0; + ws->vws = 0; + ws->ws_state[0].cur_grp = 0; + ws->ws_state[0].cur_tt = SSO_SYNC_EMPTY; + ws->ws_state[1].cur_grp = 0; + ws->ws_state[1].cur_tt = SSO_SYNC_EMPTY; + } else { + struct otx2_ssogws *ws; + + ws = event_dev->data->ports[i]; + ssogws_reset(ws); + ws->swtag_req = 0; + ws->cur_grp = 0; + ws->cur_tt = SSO_SYNC_EMPTY; + } + } + + rte_mb(); + if (dev->dual_ws) { + struct otx2_ssogws_dual *ws = event_dev->data->ports[0]; + struct otx2_ssogws temp_ws; + + memcpy(&temp_ws, &ws->ws_state[0], + sizeof(struct otx2_ssogws_state)); + for (i = 0; i < dev->nb_event_queues; i++) { + /* Consume all the events through HWS0 */ + ssogws_flush_events(&temp_ws, i, ws->grps_base[i], + otx2_handle_event, event_dev); + /* Enable/Disable SSO GGRP */ + otx2_write64(enable, ws->grps_base[i] + + SSO_LF_GGRP_QCTL); + } + ws->ws_state[0].cur_grp = 0; + ws->ws_state[0].cur_tt = SSO_SYNC_EMPTY; + } else { + struct otx2_ssogws *ws = event_dev->data->ports[0]; + + for (i = 0; i < dev->nb_event_queues; i++) { + /* Consume all the events through HWS0 */ + ssogws_flush_events(ws, i, ws->grps_base[i], + otx2_handle_event, event_dev); + /* Enable/Disable SSO GGRP */ + otx2_write64(enable, ws->grps_base[i] + + SSO_LF_GGRP_QCTL); + } + ws->cur_grp = 0; + ws->cur_tt = SSO_SYNC_EMPTY; + } + + /* reset SSO GWS cache */ + otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox); + otx2_mbox_process(dev->mbox); +} + +static int 
+otx2_sso_start(struct rte_eventdev *event_dev) +{ + sso_func_trace(); + sso_cleanup(event_dev, 1); + sso_fastpath_fns_set(event_dev); + + return 0; +} + /* Initialize and register event driver with DPDK Application */ static struct rte_eventdev_ops otx2_sso_ops = { .dev_infos_get = otx2_sso_info_get, @@ -908,6 +1030,7 @@ static struct rte_eventdev_ops otx2_sso_ops = { .xstats_get_names = otx2_sso_xstats_get_names, .dump = otx2_sso_dump, + .dev_start = otx2_sso_start, }; #define OTX2_SSO_XAE_CNT "xae_cnt" @@ -975,8 +1098,10 @@ otx2_sso_init(struct rte_eventdev *event_dev) event_dev->dev_ops = &otx2_sso_ops; /* For secondary processes, the primary has done all the work */ - if (rte_eal_process_type() != RTE_PROC_PRIMARY) + if (rte_eal_process_type() != RTE_PROC_PRIMARY) { + sso_fastpath_fns_set(event_dev); return 0; + } dev = sso_pmd_priv(event_dev); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 8e614b109..4428abcfa 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -232,6 +232,12 @@ uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], uint16_t nb_events, uint64_t timeout_ticks); +void sso_fastpath_fns_set(struct rte_eventdev *event_dev); +/* Clean up API's */ +typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev); +void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, + uintptr_t base, otx2_handle_event_t fn, void *arg); +void ssogws_reset(struct otx2_ssogws *ws); /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c index edc574673..7a6d4cad2 100644 --- a/drivers/event/octeontx2/otx2_worker.c +++ b/drivers/event/octeontx2/otx2_worker.c @@ -194,3 +194,77 @@ otx2_ssogws_enq_fwd_burst(void 
*port, const struct rte_event ev[], return 1; } + +void +ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base, + otx2_handle_event_t fn, void *arg) +{ + uint64_t cq_ds_cnt = 1; + uint64_t aq_cnt = 1; + uint64_t ds_cnt = 1; + struct rte_event ev; + uint64_t enable; + uint64_t val; + + enable = otx2_read64(base + SSO_LF_GGRP_QCTL); + if (!enable) + return; + + val = queue_id; /* GGRP ID */ + val |= BIT_ULL(18); /* Grouped */ + val |= BIT_ULL(16); /* WAIT */ + + aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT); + ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT); + cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT); + cq_ds_cnt &= 0x3FFF3FFF0000; + + while (aq_cnt || cq_ds_cnt || ds_cnt) { + otx2_write64(val, ws->getwrk_op); + otx2_ssogws_get_work_empty(ws, &ev); + if (fn != NULL && ev.u64 != 0) + fn(arg, ev); + if (ev.sched_type != SSO_TT_EMPTY) + otx2_ssogws_swtag_flush(ws); + rte_mb(); + aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT); + ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT); + cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT); + /* Extract cq and ds count */ + cq_ds_cnt &= 0x3FFF3FFF0000; + } + + otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) + + SSOW_LF_GWS_OP_GWC_INVAL); + rte_mb(); +} + +void +ssogws_reset(struct otx2_ssogws *ws) +{ + uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op); + uint64_t pend_state; + uint8_t pend_tt; + uint64_t tag; + + /* Wait till getwork/swtp/waitw/desched completes. */ + do { + pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE); + rte_mb(); + } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58))); + + tag = otx2_read64(base + SSOW_LF_GWS_TAG); + pend_tt = (tag >> 32) & 0x3; + if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */ + if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED) + otx2_ssogws_swtag_untag(ws); + otx2_ssogws_desched(ws); + } + rte_mb(); + + /* Wait for desched to complete. 
*/ + do { + pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE); + rte_mb(); + } while (pend_state & BIT_ULL(58)); +} From patchwork Fri Jun 28 18:23:34 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55628 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:34 +0530 Message-ID: <20190628182354.228-24-pbhagavatula@marvell.com> In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> Subject: [dpdk-dev] [PATCH v3 23/42] event/octeontx2: add devargs to control SSO GGRP QoS From: Pavan Nikhilesh SSO GGRPs, i.e. event queues, use DRAM & SRAM buffers to hold in-flight events. By default the buffers are assigned to the SSO GGRPs to satisfy minimum HW requirements; SSO is free to assign the remaining buffers to GGRPs based on a preconfigured threshold. We can control the QoS of an SSO GGRP by modifying the above mentioned thresholds: GGRPs of higher importance can be assigned higher thresholds than the rest. Example: --dev "0002:0e:00.0,qos=[1-50-50-50]" // [Qx-XAQ-TAQ-IAQ] Qx -> event queue, aka SSO GGRP. XAQ -> DRAM in-flights. TAQ & IAQ -> SRAM in-flights. The values are expressed as percentages; 0 selects the default.
Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- doc/guides/eventdevs/octeontx2.rst | 15 ++++ drivers/event/octeontx2/otx2_evdev.c | 104 ++++++++++++++++++++++++++- drivers/event/octeontx2/otx2_evdev.h | 9 +++ 3 files changed, 127 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index c864f39f9..9b235f236 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -66,6 +66,21 @@ Runtime Config Options --dev "0002:0e:00.0,single_ws=1" +- ``Event Group QoS support`` + + SSO GGRPs, i.e. event queues, use DRAM & SRAM buffers to hold in-flight + events. By default the buffers are assigned to the SSO GGRPs to + satisfy minimum HW requirements. SSO is free to assign the remaining + buffers to GGRPs based on a preconfigured threshold. + We can control the QoS of an SSO GGRP by modifying the above mentioned + thresholds. GGRPs that have higher importance can be assigned higher + thresholds than the rest. The dictionary format is as follows: + [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ], expressed in percentages; 0 represents the + default.
+ For example:: + + --dev "0002:0e:00.0,qos=[1-50-50-50]" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index d6ddee1cd..786772ba9 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -934,6 +934,34 @@ otx2_handle_event(void *arg, struct rte_event event) event, event_dev->data->dev_stop_flush_arg); } +static void +sso_qos_cfg(struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct sso_grp_qos_cfg *req; + uint16_t i; + + for (i = 0; i < dev->qos_queue_cnt; i++) { + uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt; + uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt; + uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt; + + if (dev->qos_parse_data[i].queue >= dev->nb_event_queues) + continue; + + req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox); + req->xaq_limit = (dev->nb_xaq_cfg * + (xaq_prcnt ? xaq_prcnt : 100)) / 100; + req->taq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK * + (iaq_prcnt ? iaq_prcnt : 100)) / 100; + req->iaq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK * + (taq_prcnt ? 
taq_prcnt : 100)) / 100; + } + + if (dev->qos_queue_cnt) + otx2_mbox_process(dev->mbox); +} + static void sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable) { @@ -1005,6 +1033,7 @@ static int otx2_sso_start(struct rte_eventdev *event_dev) { sso_func_trace(); + sso_qos_cfg(event_dev); sso_cleanup(event_dev, 1); sso_fastpath_fns_set(event_dev); @@ -1035,6 +1064,76 @@ static struct rte_eventdev_ops otx2_sso_ops = { #define OTX2_SSO_XAE_CNT "xae_cnt" #define OTX2_SSO_SINGLE_WS "single_ws" +#define OTX2_SSO_GGRP_QOS "qos" + +static void +parse_queue_param(char *value, void *opaque) +{ + struct otx2_sso_qos queue_qos = {0}; + uint8_t *val = (uint8_t *)&queue_qos; + struct otx2_sso_evdev *dev = opaque; + char *tok = strtok(value, "-"); + + if (!strlen(value)) + return; + + while (tok != NULL) { + *val = atoi(tok); + tok = strtok(NULL, "-"); + val++; + } + + if (val != (&queue_qos.iaq_prcnt + 1)) { + otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]"); + return; + } + + dev->qos_queue_cnt++; + dev->qos_parse_data = rte_realloc(dev->qos_parse_data, + sizeof(struct otx2_sso_qos) * + dev->qos_queue_cnt, 0); + dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos; +} + +static void +parse_qos_list(const char *value, void *opaque) +{ + char *s = strdup(value); + char *start = NULL; + char *end = NULL; + char *f = s; + + while (*s) { + if (*s == '[') + start = s; + else if (*s == ']') + end = s; + + if (start < end && *start) { + *end = 0; + parse_queue_param(start + 1, opaque); + s = end; + start = end; + } + s++; + } + + free(f); +} + +static int +parse_sso_kvargs_dict(const char *key, const char *value, void *opaque) +{ + RTE_SET_USED(key); + + /* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] use '-' cause ',' + * isn't allowed. Everything is expressed in percentages, 0 represents + * default. 
+ */ + parse_qos_list(value, opaque); + + return 0; +} static void sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) @@ -1052,6 +1151,8 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) &dev->xae_cnt); rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag, &single_ws); + rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict, + dev); dev->dual_ws = !single_ws; rte_kvargs_free(kvlist); @@ -1206,4 +1307,5 @@ RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso); RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=" - OTX2_SSO_SINGLE_WS "=1"); + OTX2_SSO_SINGLE_WS "=1" + OTX2_SSO_GGRP_QOS "="); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 4428abcfa..2aa742184 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -104,6 +104,13 @@ enum { SSO_SYNC_EMPTY }; +struct otx2_sso_qos { + uint8_t queue; + uint8_t xaq_prcnt; + uint8_t taq_prcnt; + uint8_t iaq_prcnt; +}; + struct otx2_sso_evdev { OTX2_DEV; /* Base class */ uint8_t max_event_queues; @@ -124,6 +131,8 @@ struct otx2_sso_evdev { /* Dev args */ uint8_t dual_ws; uint32_t xae_cnt; + uint8_t qos_queue_cnt; + struct otx2_sso_qos *qos_parse_data; /* HW const */ uint32_t xae_waes; uint32_t xaq_buf_size; From patchwork Fri Jun 28 18:23:35 2019 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55629 X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:35 +0530
Message-ID: <20190628182354.228-25-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 24/42] event/octeontx2: add device stop and close functions
From: Pavan Nikhilesh

Add event device stop and close callback functions.

Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.c | 39 ++++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 786772ba9..5004fe2de 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -1040,6 +1040,43 @@ otx2_sso_start(struct rte_eventdev *event_dev) return 0; } +static void +otx2_sso_stop(struct rte_eventdev *event_dev) +{ + sso_func_trace(); + sso_cleanup(event_dev, 0); + rte_mb(); +} + +static int +otx2_sso_close(struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint16_t i; + + if (!dev->configured) + return 0; + + sso_unregister_irqs(event_dev); + + for (i = 0; i < dev->nb_event_queues; i++) + all_queues[i] = i; + + for (i = 0; i < dev->nb_event_ports; i++) + otx2_sso_port_unlink(event_dev, event_dev->data->ports[i], + all_queues, dev->nb_event_queues); + + sso_lf_teardown(dev, SSO_LF_GGRP); + sso_lf_teardown(dev, SSO_LF_GWS); + dev->nb_event_ports = 0; + dev->nb_event_queues = 0; + rte_mempool_free(dev->xaq_pool); + rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME)); + + return 0; +} + /* Initialize and register event driver with DPDK Application */ static struct rte_eventdev_ops otx2_sso_ops = { .dev_infos_get = otx2_sso_info_get, @@ -1060,6 +1097,8 @@ static struct rte_eventdev_ops otx2_sso_ops = { .dump = otx2_sso_dump, .dev_start = otx2_sso_start,
+ .dev_stop = otx2_sso_stop, + .dev_close = otx2_sso_close, }; #define OTX2_SSO_XAE_CNT "xae_cnt"
From patchwork Fri Jun 28 18:23:36 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55630
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:36 +0530
Message-ID: <20190628182354.228-26-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 25/42] event/octeontx2: add SSO selftest
From: Pavan Nikhilesh

Add selftest to verify sanity of SSO.
Can be run by passing devargs to SSO PF as follows: --dev "0002:0e:00.0,selftest=1" Signed-off-by: Pavan Nikhilesh Signed-off-by: Jerin Jacob --- app/test/test_eventdev.c | 8 + doc/guides/eventdevs/octeontx2.rst | 9 + drivers/event/octeontx2/Makefile | 1 + drivers/event/octeontx2/meson.build | 1 + drivers/event/octeontx2/otx2_evdev.c | 11 +- drivers/event/octeontx2/otx2_evdev.h | 3 + drivers/event/octeontx2/otx2_evdev_selftest.c | 1511 +++++++++++++++++ 7 files changed, 1543 insertions(+), 1 deletion(-) create mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index c745e997e..783140dfe 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1014,7 +1014,15 @@ test_eventdev_selftest_octeontx(void) return test_eventdev_selftest_impl("event_octeontx", ""); } +static int +test_eventdev_selftest_octeontx2(void) +{ + return test_eventdev_selftest_impl("otx2_eventdev", ""); +} + REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common); REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw); REGISTER_TEST_COMMAND(eventdev_selftest_octeontx, test_eventdev_selftest_octeontx); +REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2, + test_eventdev_selftest_octeontx2); diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index 9b235f236..562a83d07 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -81,6 +81,15 @@ Runtime Config Options --dev "0002:0e:00.0,qos=[1-50-50-50]" +- ``Selftest`` + + The functionality of the OCTEON TX2 eventdev can be verified using this option; + various unit and functional tests are run to verify its sanity. + The tests are run once the vdev creation is successfully complete.
+ For example:: + + --dev "0002:0e:00.0,selftest=1" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index dfecda599..d6cffc1f6 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -33,6 +33,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci -lrte_kvargs diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index c2a5f3e3d..470564b08 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -6,6 +6,7 @@ sources = files('otx2_worker.c', 'otx2_worker_dual.c', 'otx2_evdev.c', 'otx2_evdev_irq.c', + 'otx2_evdev_selftest.c', ) allow_experimental_apis = true diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 5004fe2de..c5a150954 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -1099,11 +1099,13 @@ static struct rte_eventdev_ops otx2_sso_ops = { .dev_start = otx2_sso_start, .dev_stop = otx2_sso_stop, .dev_close = otx2_sso_close, + .dev_selftest = otx2_sso_selftest, }; #define OTX2_SSO_XAE_CNT "xae_cnt" #define OTX2_SSO_SINGLE_WS "single_ws" #define OTX2_SSO_GGRP_QOS "qos" +#define OTX2_SSO_SELFTEST "selftest" static void parse_queue_param(char *value, void *opaque) @@ -1186,6 +1188,8 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs) if (kvlist == NULL) return; + rte_kvargs_process(kvlist, OTX2_SSO_SELFTEST, &parse_kvargs_flag, + &dev->selftest); rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value, &dev->xae_cnt); 
rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag, @@ -1301,6 +1305,10 @@ otx2_sso_init(struct rte_eventdev *event_dev) otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d", event_dev->data->name, dev->max_event_queues, dev->max_event_ports); + if (dev->selftest) { + event_dev->dev->driver = &pci_sso.driver; + event_dev->dev_ops->dev_selftest(); + } return 0; @@ -1347,4 +1355,5 @@ RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map); RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=" OTX2_SSO_SINGLE_WS "=1" - OTX2_SSO_GGRP_QOS "="); + OTX2_SSO_GGRP_QOS "=" + OTX2_SSO_SELFTEST "=1"); diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 2aa742184..fc8dde416 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -130,6 +130,7 @@ struct otx2_sso_evdev { struct rte_mempool *xaq_pool; /* Dev args */ uint8_t dual_ws; + uint8_t selftest; uint32_t xae_cnt; uint8_t qos_queue_cnt; struct otx2_sso_qos *qos_parse_data; @@ -247,6 +248,8 @@ typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev); void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base, otx2_handle_event_t fn, void *arg); void ssogws_reset(struct otx2_ssogws *ws); +/* Selftest */ +int otx2_sso_selftest(void); /* Init and Fini API's */ int otx2_sso_init(struct rte_eventdev *event_dev); int otx2_sso_fini(struct rte_eventdev *event_dev); diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c new file mode 100644 index 000000000..8440a50aa --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_selftest.c @@ -0,0 +1,1511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "otx2_evdev.h" + +#define NUM_PACKETS (1024) +#define MAX_EVENTS (1024) + +#define OCTEONTX2_TEST_RUN(setup, teardown, test) \ + octeontx_test_run(setup, teardown, test, #test) + +static int total; +static int passed; +static int failed; +static int unsupported; + +static int evdev; +static struct rte_mempool *eventdev_test_mempool; + +struct event_attr { + uint32_t flow_id; + uint8_t event_type; + uint8_t sub_event_type; + uint8_t sched_type; + uint8_t queue; + uint8_t port; +}; + +static uint32_t seqn_list_index; +static int seqn_list[NUM_PACKETS]; + +static inline void +seqn_list_init(void) +{ + RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS); + memset(seqn_list, 0, sizeof(seqn_list)); + seqn_list_index = 0; +} + +static inline int +seqn_list_update(int val) +{ + if (seqn_list_index >= NUM_PACKETS) + return -1; + + seqn_list[seqn_list_index++] = val; + rte_smp_wmb(); + return 0; +} + +static inline int +seqn_list_check(int limit) +{ + int i; + + for (i = 0; i < limit; i++) { + if (seqn_list[i] != i) { + otx2_err("Seqn mismatch %d %d", seqn_list[i], i); + return -1; + } + } + return 0; +} + +struct test_core_param { + rte_atomic32_t *total_events; + uint64_t dequeue_tmo_ticks; + uint8_t port; + uint8_t sched_type; +}; + +static int +testsuite_setup(void) +{ + const char *eventdev_name = "event_octeontx2"; + + evdev = rte_event_dev_get_dev_id(eventdev_name); + if (evdev < 0) { + otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name); + return -1; + } + return 0; +} + +static void +testsuite_teardown(void) +{ + rte_event_dev_close(evdev); +} + +static inline void +devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf, + struct rte_event_dev_info *info) +{ + memset(dev_conf, 0, sizeof(struct rte_event_dev_config)); + dev_conf->dequeue_timeout_ns = 
info->min_dequeue_timeout_ns; + dev_conf->nb_event_ports = info->max_event_ports; + dev_conf->nb_event_queues = info->max_event_queues; + dev_conf->nb_event_queue_flows = info->max_event_queue_flows; + dev_conf->nb_event_port_dequeue_depth = + info->max_event_port_dequeue_depth; + dev_conf->nb_event_port_enqueue_depth = + info->max_event_port_enqueue_depth; + dev_conf->nb_events_limit = + info->max_num_events; +} + +enum { + TEST_EVENTDEV_SETUP_DEFAULT, + TEST_EVENTDEV_SETUP_PRIORITY, + TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT, +}; + +static inline int +_eventdev_setup(int mode) +{ + const char *pool_name = "evdev_octeontx_test_pool"; + struct rte_event_dev_config dev_conf; + struct rte_event_dev_info info; + int i, ret; + + /* Create and destroy pool for each test case to make it standalone */ + eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS, + 0, 0, 512, + rte_socket_id()); + if (!eventdev_test_mempool) { + otx2_err("ERROR creating mempool"); + return -1; + } + + ret = rte_event_dev_info_get(evdev, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + + devconf_set_default_sane_values(&dev_conf, &info); + if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT) + dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT; + + ret = rte_event_dev_configure(evdev, &dev_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); + + uint32_t queue_count; + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + + if (mode == TEST_EVENTDEV_SETUP_PRIORITY) { + if (queue_count > 8) + queue_count = 8; + + /* Configure event queues(0 to n) with + * RTE_EVENT_DEV_PRIORITY_HIGHEST to + * RTE_EVENT_DEV_PRIORITY_LOWEST + */ + uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) / + queue_count; + for (i = 0; i < (int)queue_count; i++) { + struct rte_event_queue_conf
queue_conf; + + ret = rte_event_queue_default_conf_get(evdev, i, + &queue_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", + i); + queue_conf.priority = i * step; + ret = rte_event_queue_setup(evdev, i, &queue_conf); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", + i); + } + + } else { + /* Configure event queues with default priority */ + for (i = 0; i < (int)queue_count; i++) { + ret = rte_event_queue_setup(evdev, i, NULL); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", + i); + } + } + /* Configure event ports */ + uint32_t port_count; + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), + "Port count get failed"); + for (i = 0; i < (int)port_count; i++) { + ret = rte_event_port_setup(evdev, i, NULL); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i); + ret = rte_event_port_link(evdev, i, NULL, NULL, 0); + RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", + i); + } + + ret = rte_event_dev_start(evdev); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device"); + + return 0; +} + +static inline int +eventdev_setup(void) +{ + return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT); +} + +static inline int +eventdev_setup_priority(void) +{ + return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY); +} + +static inline int +eventdev_setup_dequeue_timeout(void) +{ + return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT); +} + +static inline void +eventdev_teardown(void) +{ + rte_event_dev_stop(evdev); + rte_mempool_free(eventdev_test_mempool); +} + +static inline void +update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev, + uint32_t flow_id, uint8_t event_type, + uint8_t sub_event_type, uint8_t sched_type, + uint8_t queue, uint8_t port) +{ + struct event_attr *attr; + + /* Store the event attributes in mbuf for future reference */ + attr = rte_pktmbuf_mtod(m, struct event_attr *); + attr->flow_id = flow_id; + attr->event_type = 
event_type; + attr->sub_event_type = sub_event_type; + attr->sched_type = sched_type; + attr->queue = queue; + attr->port = port; + + ev->flow_id = flow_id; + ev->sub_event_type = sub_event_type; + ev->event_type = event_type; + /* Inject the new event */ + ev->op = RTE_EVENT_OP_NEW; + ev->sched_type = sched_type; + ev->queue_id = queue; + ev->mbuf = m; +} + +static inline int +inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, + uint8_t sched_type, uint8_t queue, uint8_t port, + unsigned int events) +{ + struct rte_mbuf *m; + unsigned int i; + + for (i = 0; i < events; i++) { + struct rte_event ev = {.event = 0, .u64 = 0}; + + m = rte_pktmbuf_alloc(eventdev_test_mempool); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + + m->seqn = i; + update_event_and_validation_attr(m, &ev, flow_id, event_type, + sub_event_type, sched_type, + queue, port); + rte_event_enqueue_burst(evdev, port, &ev, 1); + } + return 0; +} + +static inline int +check_excess_events(uint8_t port) +{ + uint16_t valid_event; + struct rte_event ev; + int i; + + /* Check for excess events, try for a few times and exit */ + for (i = 0; i < 32; i++) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + + RTE_TEST_ASSERT_SUCCESS(valid_event, + "Unexpected valid event=%d", + ev.mbuf->seqn); + } + return 0; +} + +static inline int +generate_random_events(const unsigned int total_events) +{ + struct rte_event_dev_info info; + uint32_t queue_count; + unsigned int i; + int ret; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + + ret = rte_event_dev_info_get(evdev, &info); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + for (i = 0; i < total_events; i++) { + ret = inject_events( + rte_rand() % info.max_event_queue_flows /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + 
rte_rand() % queue_count /* queue */, + 0 /* port */, + 1 /* events */); + if (ret) + return -1; + } + return ret; +} + + +static inline int +validate_event(struct rte_event *ev) +{ + struct event_attr *attr; + + attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *); + RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, + "flow_id mismatch enq=%d deq =%d", + attr->flow_id, ev->flow_id); + RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, + "event_type mismatch enq=%d deq =%d", + attr->event_type, ev->event_type); + RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, + "sub_event_type mismatch enq=%d deq =%d", + attr->sub_event_type, ev->sub_event_type); + RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, + "sched_type mismatch enq=%d deq =%d", + attr->sched_type, ev->sched_type); + RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, + "queue mismatch enq=%d deq =%d", + attr->queue, ev->queue_id); + return 0; +} + +typedef int (*validate_event_cb)(uint32_t index, uint8_t port, + struct rte_event *ev); + +static inline int +consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) +{ + uint32_t events = 0, forward_progress_cnt = 0, index = 0; + uint16_t valid_event; + struct rte_event ev; + int ret; + + while (1) { + if (++forward_progress_cnt > UINT16_MAX) { + otx2_err("Detected deadlock"); + return -1; + } + + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + forward_progress_cnt = 0; + ret = validate_event(&ev); + if (ret) + return -1; + + if (fn != NULL) { + ret = fn(index, port, &ev); + RTE_TEST_ASSERT_SUCCESS(ret, + "Failed to validate test specific event"); + } + + ++index; + + rte_pktmbuf_free(ev.mbuf); + if (++events >= total_events) + break; + } + + return check_excess_events(port); +} + +static int +validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) +{ + RTE_SET_USED(port); + RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", 
+ index, ev->mbuf->seqn); + return 0; +} + +static inline int +test_simple_enqdeq(uint8_t sched_type) +{ + int ret; + + ret = inject_events(0 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + 0 /* sub_event_type */, + sched_type, + 0 /* queue */, + 0 /* port */, + MAX_EVENTS); + if (ret) + return -1; + + return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq); +} + +static int +test_simple_enqdeq_ordered(void) +{ + return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED); +} + +static int +test_simple_enqdeq_atomic(void) +{ + return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_simple_enqdeq_parallel(void) +{ + return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL); +} + +/* + * Generate a prescribed number of events and spread them across available + * queues. On dequeue, using single event port(port 0) verify the enqueued + * event attributes + */ +static int +test_multi_queue_enq_single_port_deq(void) +{ + int ret; + + ret = generate_random_events(MAX_EVENTS); + if (ret) + return -1; + + return consume_events(0 /* port */, MAX_EVENTS, NULL); +} + +/* + * Inject 0..MAX_EVENTS events over 0..queue_count with modulus + * operation + * + * For example, Inject 32 events over 0..7 queues + * enqueue events 0, 8, 16, 24 in queue 0 + * enqueue events 1, 9, 17, 25 in queue 1 + * .. + * .. 
+ * enqueue events 7, 15, 23, 31 in queue 7 + * + * On dequeue, Validate the events comes in 0,8,16,24,1,9,17,25..,7,15,23,31 + * order from queue0(highest priority) to queue7(lowest_priority) + */ +static int +validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) +{ + uint32_t queue_count; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + if (queue_count > 8) + queue_count = 8; + uint32_t range = MAX_EVENTS / queue_count; + uint32_t expected_val = (index % range) * queue_count; + + expected_val += ev->queue_id; + RTE_SET_USED(port); + RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, + "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", + ev->mbuf->seqn, index, expected_val, range, + queue_count, MAX_EVENTS); + return 0; +} + +static int +test_multi_queue_priority(void) +{ + int i, max_evts_roundoff; + /* See validate_queue_priority() comments for priority validate logic */ + uint32_t queue_count; + struct rte_mbuf *m; + uint8_t queue; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + if (queue_count > 8) + queue_count = 8; + max_evts_roundoff = MAX_EVENTS / queue_count; + max_evts_roundoff *= queue_count; + + for (i = 0; i < max_evts_roundoff; i++) { + struct rte_event ev = {.event = 0, .u64 = 0}; + + m = rte_pktmbuf_alloc(eventdev_test_mempool); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + + m->seqn = i; + queue = i % queue_count; + update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU, + 0, RTE_SCHED_TYPE_PARALLEL, + queue, 0); + rte_event_enqueue_burst(evdev, 0, &ev, 1); + } + + return consume_events(0, max_evts_roundoff, validate_queue_priority); +} + +static int +worker_multi_port_fn(void *arg) +{ + struct test_core_param *param = arg; + rte_atomic32_t *total_events = param->total_events; + uint8_t port = param->port; + 
uint16_t valid_event; + struct rte_event ev; + int ret; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + ret = validate_event(&ev); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } + + return 0; +} + +static inline int +wait_workers_to_join(const rte_atomic32_t *count) +{ + uint64_t cycles, print_cycles; + + cycles = rte_get_timer_cycles(); + print_cycles = cycles; + while (rte_atomic32_read(count)) { + uint64_t new_cycles = rte_get_timer_cycles(); + + if (new_cycles - print_cycles > rte_get_timer_hz()) { + otx2_err("Events %d", rte_atomic32_read(count)); + print_cycles = new_cycles; + } + if (new_cycles - cycles > rte_get_timer_hz() * 10) { + otx2_err("No schedules for 10 seconds, deadlock (%d)", + rte_atomic32_read(count)); + rte_event_dev_dump(evdev, stdout); + cycles = new_cycles; + return -1; + } + } + rte_eal_mp_wait_lcore(); + + return 0; +} + +static inline int +launch_workers_and_wait(int (*master_worker)(void *), + int (*slave_workers)(void *), uint32_t total_events, + uint8_t nb_workers, uint8_t sched_type) +{ + rte_atomic32_t atomic_total_events; + struct test_core_param *param; + uint64_t dequeue_tmo_ticks; + uint8_t port = 0; + int w_lcore; + int ret; + + if (!nb_workers) + return 0; + + rte_atomic32_set(&atomic_total_events, total_events); + seqn_list_init(); + + param = malloc(sizeof(struct test_core_param) * nb_workers); + if (!param) + return -1; + + ret = rte_event_dequeue_timeout_ticks(evdev, + rte_rand() % 10000000/* 10ms */, + &dequeue_tmo_ticks); + if (ret) { + free(param); + return -1; + } + + param[0].total_events = &atomic_total_events; + param[0].sched_type = sched_type; + param[0].port = 0; + param[0].dequeue_tmo_ticks = dequeue_tmo_ticks; + rte_wmb(); + + w_lcore = rte_get_next_lcore( + /* start core */ -1, + /* skip master */ 1, + /* wrap */ 0); +
rte_eal_remote_launch(master_worker, &param[0], w_lcore); + + for (port = 1; port < nb_workers; port++) { + param[port].total_events = &atomic_total_events; + param[port].sched_type = sched_type; + param[port].port = port; + param[port].dequeue_tmo_ticks = dequeue_tmo_ticks; + rte_smp_wmb(); + w_lcore = rte_get_next_lcore(w_lcore, 1, 0); + rte_eal_remote_launch(slave_workers, &param[port], w_lcore); + } + + rte_smp_wmb(); + ret = wait_workers_to_join(&atomic_total_events); + free(param); + + return ret; +} + +/* + * Generate a prescribed number of events and spread them across available + * queues. Dequeue the events through multiple ports and verify the enqueued + * event attributes + */ +static int +test_multi_queue_enq_multi_port_deq(void) +{ + const unsigned int total_events = MAX_EVENTS; + uint32_t nr_ports; + int ret; + + ret = generate_random_events(total_events); + if (ret) + return -1; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + if (!nr_ports) { + otx2_err("Not enough ports=%d or workers=%d", nr_ports, + rte_lcore_count() - 1); + return 0; + } + + return launch_workers_and_wait(worker_multi_port_fn, + worker_multi_port_fn, total_events, + nr_ports, 0xff /* invalid */); +} + +static +void flush(uint8_t dev_id, struct rte_event event, void *arg) +{ + unsigned int *count = arg; + + RTE_SET_USED(dev_id); + if (event.event_type == RTE_EVENT_TYPE_CPU) + *count = *count + 1; +} + +static int +test_dev_stop_flush(void) +{ + unsigned int total_events = MAX_EVENTS, count = 0; + int ret; + + ret = generate_random_events(total_events); + if (ret) + return -1; + + ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count); + if (ret) + return -2; + rte_event_dev_stop(evdev); + ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL); + if (ret) + return -3; + RTE_TEST_ASSERT_EQUAL(total_events, count, + "count
mismatch total_events=%d count=%d", + total_events, count); + + return 0; +} + +static int +validate_queue_to_port_single_link(uint32_t index, uint8_t port, + struct rte_event *ev) +{ + RTE_SET_USED(index); + RTE_TEST_ASSERT_EQUAL(port, ev->queue_id, + "queue mismatch enq=%d deq =%d", + port, ev->queue_id); + + return 0; +} + +/* + * Link queue x to port x and check correctness of link by checking + * queue_id == x on dequeue on the specific port x + */ +static int +test_queue_to_port_single_link(void) +{ + int i, nr_links, ret; + uint32_t queue_count; + uint32_t port_count; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), + "Port count get failed"); + + /* Unlink all connections that created in eventdev_setup */ + for (i = 0; i < (int)port_count; i++) { + ret = rte_event_port_unlink(evdev, i, NULL, 0); + RTE_TEST_ASSERT(ret >= 0, + "Failed to unlink all queues port=%d", i); + } + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + + nr_links = RTE_MIN(port_count, queue_count); + const unsigned int total_events = MAX_EVENTS / nr_links; + + /* Link queue x to port x and inject events to queue x through port x */ + for (i = 0; i < nr_links; i++) { + uint8_t queue = (uint8_t)i; + + ret = rte_event_port_link(evdev, i, &queue, NULL, 1); + RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); + + ret = inject_events(0x100 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + queue /* queue */, i /* port */, + total_events /* events */); + if (ret) + return -1; + } + + /* Verify the events generated from correct queue */ + for (i = 0; i < nr_links; i++) { + ret = consume_events(i /* port */, total_events, + validate_queue_to_port_single_link); + if (ret) + return -1; + } + + return 0; +} + +static int 
+validate_queue_to_port_multi_link(uint32_t index, uint8_t port, + struct rte_event *ev) +{ + RTE_SET_USED(index); + RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), + "queue mismatch enq=%d deq =%d", + port, ev->queue_id); + + return 0; +} + +/* + * Link all even number of queues to port 0 and all odd number of queues to + * port 1 and verify the link connection on dequeue + */ +static int +test_queue_to_port_multi_link(void) +{ + int ret, port0_events = 0, port1_events = 0; + uint32_t nr_queues = 0; + uint32_t nr_ports = 0; + uint8_t queue, port; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues), + "Queue count get failed"); + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + + if (nr_ports < 2) { + otx2_err("Not enough ports to test ports=%d", nr_ports); + return 0; + } + + /* Unlink all connections that were created in eventdev_setup */ + for (port = 0; port < nr_ports; port++) { + ret = rte_event_port_unlink(evdev, port, NULL, 0); + RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", + port); + } + + const unsigned int total_events = MAX_EVENTS / nr_queues; + + /* Link all even number of queues to port0 and odd numbers to port 1*/ + for (queue = 0; queue < nr_queues; queue++) { + port = queue & 0x1; + ret = rte_event_port_link(evdev, port, &queue, NULL, 1); + RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", + queue, port); + + ret = inject_events(0x100 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + rte_rand() % 256 /* sub_event_type */, + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1), + queue /* queue */, port /* port */, + total_events /* events */); + if (ret) + return -1; + + if (port == 0) + port0_events += total_events; + else + port1_events += total_events; + } + +
ret = consume_events(0 /* port */, port0_events, + validate_queue_to_port_multi_link); + if (ret) + return -1; + ret = consume_events(1 /* port */, port1_events, + validate_queue_to_port_multi_link); + if (ret) + return -1; + + return 0; +} + +static int +worker_flow_based_pipeline(void *arg) +{ + struct test_core_param *param = arg; + uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks; + rte_atomic32_t *total_events = param->total_events; + uint8_t new_sched_type = param->sched_type; + uint8_t port = param->port; + uint16_t valid_event; + struct rte_event ev; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, + dequeue_tmo_ticks); + if (!valid_event) + continue; + + /* Events from stage 0 */ + if (ev.sub_event_type == 0) { + /* Move to atomic flow to maintain the ordering */ + ev.flow_id = 0x2; + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.sub_event_type = 1; /* stage 1 */ + ev.sched_type = new_sched_type; + ev.op = RTE_EVENT_OP_FORWARD; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ + if (seqn_list_update(ev.mbuf->seqn) == 0) { + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } else { + otx2_err("Failed to update seqn_list"); + return -1; + } + } else { + otx2_err("Invalid ev.sub_event_type = %d", + ev.sub_event_type); + return -1; + } + } + return 0; +} + +static int +test_multiport_flow_sched_type_test(uint8_t in_sched_type, + uint8_t out_sched_type) +{ + const unsigned int total_events = MAX_EVENTS; + uint32_t nr_ports; + int ret; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + if (!nr_ports) { + otx2_err("Not enough ports=%d or workers=%d", nr_ports, + rte_lcore_count() - 1); + return 0; + } + + /* Injects events with m->seqn=0 to total_events */ + ret = inject_events(0x1 
/*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + 0 /* sub_event_type (stage 0) */, + in_sched_type, + 0 /* queue */, + 0 /* port */, + total_events /* events */); + if (ret) + return -1; + + rte_mb(); + ret = launch_workers_and_wait(worker_flow_based_pipeline, + worker_flow_based_pipeline, total_events, + nr_ports, out_sched_type); + if (ret) + return -1; + + if (in_sched_type != RTE_SCHED_TYPE_PARALLEL && + out_sched_type == RTE_SCHED_TYPE_ATOMIC) { + /* Check the events order maintained or not */ + return seqn_list_check(total_events); + } + + return 0; +} + +/* Multi port ordered to atomic transaction */ +static int +test_multi_port_flow_ordered_to_atomic(void) +{ + /* Ingress event order test */ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_flow_ordered_to_ordered(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_ORDERED); +} + +static int +test_multi_port_flow_ordered_to_parallel(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +test_multi_port_flow_atomic_to_atomic(void) +{ + /* Ingress event order test */ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_flow_atomic_to_ordered(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_ORDERED); +} + +static int +test_multi_port_flow_atomic_to_parallel(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +test_multi_port_flow_parallel_to_atomic(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_flow_parallel_to_ordered(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_ORDERED); +} + +static int 
+test_multi_port_flow_parallel_to_parallel(void) +{ + return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +worker_group_based_pipeline(void *arg) +{ + struct test_core_param *param = arg; + uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks; + rte_atomic32_t *total_events = param->total_events; + uint8_t new_sched_type = param->sched_type; + uint8_t port = param->port; + uint16_t valid_event; + struct rte_event ev; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, + dequeue_tmo_ticks); + if (!valid_event) + continue; + + /* Events from stage 0(group 0) */ + if (ev.queue_id == 0) { + /* Move to atomic flow to maintain the ordering */ + ev.flow_id = 0x2; + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.sched_type = new_sched_type; + ev.queue_id = 1; /* Stage 1*/ + ev.op = RTE_EVENT_OP_FORWARD; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ + if (seqn_list_update(ev.mbuf->seqn) == 0) { + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } else { + otx2_err("Failed to update seqn_list"); + return -1; + } + } else { + otx2_err("Invalid ev.queue_id = %d", ev.queue_id); + return -1; + } + } + + return 0; +} + +static int +test_multiport_queue_sched_type_test(uint8_t in_sched_type, + uint8_t out_sched_type) +{ + const unsigned int total_events = MAX_EVENTS; + uint32_t queue_count; + uint32_t nr_ports; + int ret; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + if (queue_count < 2 || !nr_ports) { + otx2_err("Not enough queues=%d ports=%d or workers=%d", + queue_count, nr_ports, + rte_lcore_count() - 1); 
+ return 0; + } + + /* Injects events with m->seqn=0 to total_events */ + ret = inject_events(0x1 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + 0 /* sub_event_type (stage 0) */, + in_sched_type, + 0 /* queue */, + 0 /* port */, + total_events /* events */); + if (ret) + return -1; + + ret = launch_workers_and_wait(worker_group_based_pipeline, + worker_group_based_pipeline, total_events, + nr_ports, out_sched_type); + if (ret) + return -1; + + if (in_sched_type != RTE_SCHED_TYPE_PARALLEL && + out_sched_type == RTE_SCHED_TYPE_ATOMIC) { + /* Check the events order maintained or not */ + return seqn_list_check(total_events); + } + + return 0; +} + +static int +test_multi_port_queue_ordered_to_atomic(void) +{ + /* Ingress event order test */ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_queue_ordered_to_ordered(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_ORDERED); +} + +static int +test_multi_port_queue_ordered_to_parallel(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +test_multi_port_queue_atomic_to_atomic(void) +{ + /* Ingress event order test */ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_queue_atomic_to_ordered(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_ORDERED); +} + +static int +test_multi_port_queue_atomic_to_parallel(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +test_multi_port_queue_parallel_to_atomic(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_ATOMIC); +} + +static int +test_multi_port_queue_parallel_to_ordered(void) +{ + return 
test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_ORDERED); +} + +static int +test_multi_port_queue_parallel_to_parallel(void) +{ + return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL, + RTE_SCHED_TYPE_PARALLEL); +} + +static int +worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg) +{ + struct test_core_param *param = arg; + rte_atomic32_t *total_events = param->total_events; + uint8_t port = param->port; + uint16_t valid_event; + struct rte_event ev; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + if (ev.sub_event_type == 255) { /* last stage */ + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } else { + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.sub_event_type++; + ev.sched_type = + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1); + ev.op = RTE_EVENT_OP_FORWARD; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } + } + + return 0; +} + +static int +launch_multi_port_max_stages_random_sched_type(int (*fn)(void *)) +{ + uint32_t nr_ports; + int ret; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + if (!nr_ports) { + otx2_err("Not enough ports=%d or workers=%d", + nr_ports, rte_lcore_count() - 1); + return 0; + } + + /* Injects events with m->seqn=0 to total_events */ + ret = inject_events(0x1 /*flow_id */, + RTE_EVENT_TYPE_CPU /* event_type */, + 0 /* sub_event_type (stage 0) */, + rte_rand() % + (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */, + 0 /* queue */, + 0 /* port */, + MAX_EVENTS /* events */); + if (ret) + return -1; + + return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports, + 0xff /* invalid */); +} + +/* Flow based pipeline with maximum stages with random sched type */ +static int +test_multi_port_flow_max_stages_random_sched_type(void) +{ 
+ return launch_multi_port_max_stages_random_sched_type( + worker_flow_based_pipeline_max_stages_rand_sched_type); +} + +static int +worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg) +{ + struct test_core_param *param = arg; + uint8_t port = param->port; + uint32_t queue_count; + uint16_t valid_event; + struct rte_event ev; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + uint8_t nr_queues = queue_count; + rte_atomic32_t *total_events = param->total_events; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + if (ev.queue_id == nr_queues - 1) { /* last stage */ + rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } else { + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.queue_id++; + ev.sched_type = + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1); + ev.op = RTE_EVENT_OP_FORWARD; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } + } + + return 0; +} + +/* Queue based pipeline with maximum stages with random sched type */ +static int +test_multi_port_queue_max_stages_random_sched_type(void) +{ + return launch_multi_port_max_stages_random_sched_type( + worker_queue_based_pipeline_max_stages_rand_sched_type); +} + +static int +worker_mixed_pipeline_max_stages_rand_sched_type(void *arg) +{ + struct test_core_param *param = arg; + uint8_t port = param->port; + uint32_t queue_count; + uint16_t valid_event; + struct rte_event ev; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), + "Queue count get failed"); + uint8_t nr_queues = queue_count; + rte_atomic32_t *total_events = param->total_events; + + while (rte_atomic32_read(total_events) > 0) { + valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); + if (!valid_event) + continue; + + if (ev.queue_id == nr_queues - 1) { /* Last stage */ + 
rte_pktmbuf_free(ev.mbuf); + rte_atomic32_sub(total_events, 1); + } else { + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.queue_id++; + ev.sub_event_type = rte_rand() % 256; + ev.sched_type = + rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1); + ev.op = RTE_EVENT_OP_FORWARD; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } + } + + return 0; +} + +/* Queue and flow based pipeline with maximum stages with random sched type */ +static int +test_multi_port_mixed_max_stages_random_sched_type(void) +{ + return launch_multi_port_max_stages_random_sched_type( + worker_mixed_pipeline_max_stages_rand_sched_type); +} + +static int +worker_ordered_flow_producer(void *arg) +{ + struct test_core_param *param = arg; + uint8_t port = param->port; + struct rte_mbuf *m; + int counter = 0; + + while (counter < NUM_PACKETS) { + m = rte_pktmbuf_alloc(eventdev_test_mempool); + if (m == NULL) + continue; + + m->seqn = counter++; + + struct rte_event ev = {.event = 0, .u64 = 0}; + + ev.flow_id = 0x1; /* Generate a fat flow */ + ev.sub_event_type = 0; + /* Inject the new event */ + ev.op = RTE_EVENT_OP_NEW; + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.sched_type = RTE_SCHED_TYPE_ORDERED; + ev.queue_id = 0; + ev.mbuf = m; + rte_event_enqueue_burst(evdev, port, &ev, 1); + } + + return 0; +} + +static inline int +test_producer_consumer_ingress_order_test(int (*fn)(void *)) +{ + uint32_t nr_ports; + + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), + "Port count get failed"); + nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); + + if (rte_lcore_count() < 3 || nr_ports < 2) { + otx2_err("### Not enough cores for test."); + return 0; + } + + launch_workers_and_wait(worker_ordered_flow_producer, fn, + NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC); + /* Check the events order maintained or not */ + return seqn_list_check(NUM_PACKETS); +} + +/* Flow based producer consumer ingress order test */ +static int 
+test_flow_producer_consumer_ingress_order_test(void) +{ + return test_producer_consumer_ingress_order_test( + worker_flow_based_pipeline); +} + +/* Queue based producer consumer ingress order test */ +static int +test_queue_producer_consumer_ingress_order_test(void) +{ + return test_producer_consumer_ingress_order_test( + worker_group_based_pipeline); +} + +static void octeontx_test_run(int (*setup)(void), void (*tdown)(void), + int (*test)(void), const char *name) +{ + if (setup() < 0) { + printf("Error setting up test %s", name); + unsupported++; + } else { + if (test() < 0) { + failed++; + printf("+ TestCase [%2d] : %s failed\n", total, name); + } else { + passed++; + printf("+ TestCase [%2d] : %s succeeded\n", total, + name); + } + } + + total++; + tdown(); +} + +int +otx2_sso_selftest(void) +{ + testsuite_setup(); + + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_simple_enqdeq_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_simple_enqdeq_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_simple_enqdeq_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_queue_enq_single_port_deq); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_dev_stop_flush); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_queue_enq_multi_port_deq); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_to_port_single_link); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_to_port_multi_link); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, 
eventdev_teardown, + test_multi_port_flow_atomic_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_atomic); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_ordered); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_parallel); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_max_stages_random_sched_type); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_max_stages_random_sched_type); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_mixed_max_stages_random_sched_type); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_flow_producer_consumer_ingress_order_test); + OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_producer_consumer_ingress_order_test); + OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown, + 
+			test_multi_queue_priority);
+	OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+	printf("Total tests   : %d\n", total);
+	printf("Passed        : %d\n", passed);
+	printf("Failed        : %d\n", failed);
+	printf("Not supported : %d\n", unsupported);
+
+	testsuite_teardown();
+
+	if (failed)
+		return -1;
+
+	return 0;
+}

From patchwork Fri Jun 28 18:23:37 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55631
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
Date: Fri, 28 Jun 2019 23:53:37 +0530
Message-ID: <20190628182354.228-27-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
References: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 26/42] event/octeontx2: add event timer support

From: Pavan Nikhilesh

Add event timer adapter, aka TIM, initialization on SSO probe.
Signed-off-by: Pavan Nikhilesh
---
 doc/guides/eventdevs/octeontx2.rst       |  6 ++
 drivers/event/octeontx2/Makefile         |  1 +
 drivers/event/octeontx2/meson.build      |  1 +
 drivers/event/octeontx2/otx2_evdev.c     |  3 +
 drivers/event/octeontx2/otx2_tim_evdev.c | 78 ++++++++++++++++++++++++
 drivers/event/octeontx2/otx2_tim_evdev.h | 36 +++++++++++
 6 files changed, 125 insertions(+)
 create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
 create mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h

diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 562a83d07..98d0dfb6f 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -28,6 +28,10 @@ Features of the OCTEON TX2 SSO PMD are:
 - Open system with configurable amount of outstanding events limited only by
   DRAM
 - HW accelerated dequeue timeout support to enable power management
+- HW managed event timer support through TIM, with high precision and
+  a time granularity of 2.5us
+- Up to 256 TIM rings, i.e., event timer adapters
+- Up to 8 rings traversed in parallel
Prerequisites and Compilation procedure --------------------------------------- @@ -102,3 +106,5 @@ Debugging Options +===+============+=======================================================+ | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' | +---+------------+-------------------------------------------------------+ + | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' | + +---+------------+-------------------------------------------------------+ diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index d6cffc1f6..2290622dd 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -33,6 +33,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index 470564b08..ad7f2e084 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -7,6 +7,7 @@ sources = files('otx2_worker.c', 'otx2_evdev.c', 'otx2_evdev_irq.c', 'otx2_evdev_selftest.c', + 'otx2_tim_evdev.c', ) allow_experimental_apis = true diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index c5a150954..a716167b3 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -15,6 +15,7 @@ #include "otx2_evdev_stats.h" #include "otx2_evdev.h" #include "otx2_irq.h" +#include "otx2_tim_evdev.h" static inline int sso_get_msix_offsets(const struct rte_eventdev *event_dev) @@ -1310,6 +1311,7 @@ otx2_sso_init(struct rte_eventdev *event_dev) event_dev->dev_ops->dev_selftest(); } + 
otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
 
 	return 0;
 
@@ -1345,6 +1347,7 @@ otx2_sso_fini(struct rte_eventdev *event_dev)
 		return -EAGAIN;
 	}
 
+	otx2_tim_fini();
 	otx2_dev_fini(pci_dev, dev);
 
 	return 0;

diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
new file mode 100644
index 000000000..004701f64
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_evdev.h"
+#include "otx2_tim_evdev.h"
+
+void
+otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
+{
+	struct rsrc_attach_req *atch_req;
+	struct free_rsrcs_rsp *rsrc_cnt;
+	const struct rte_memzone *mz;
+	struct otx2_tim_evdev *dev;
+	int rc;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+
+	mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
+				 sizeof(struct otx2_tim_evdev),
+				 rte_socket_id(), 0);
+	if (mz == NULL) {
+		otx2_tim_dbg("Unable to allocate memory for TIM Event device");
+		return;
+	}
+
+	dev = mz->addr;
+	dev->pci_dev = pci_dev;
+	dev->mbox = cmn_dev->mbox;
+	dev->bar2 = cmn_dev->bar2;
+
+	otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
+	rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
+	if (rc < 0) {
+		otx2_err("Unable to get free rsrc count.");
+		goto mz_free;
+	}
+
+	dev->nb_rings = rsrc_cnt->tim;
+
+	if (!dev->nb_rings) {
+		otx2_tim_dbg("No TIM Logical functions provisioned.");
+		goto mz_free;
+	}
+
+	atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
+	atch_req->modify = true;
+	atch_req->timlfs = dev->nb_rings;
+
+	rc = otx2_mbox_process(dev->mbox);
+	if (rc < 0) {
+		otx2_err("Unable to attach TIM rings.");
+		goto mz_free;
+	}
+
+	return;
+
+mz_free:
+	rte_memzone_free(mz);
+}
+
+void
+otx2_tim_fini(void)
+{
+	struct otx2_tim_evdev *dev = tim_priv_get();
+	struct rsrc_detach_req *dtch_req;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return;
+
+	dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
+	dtch_req->partial = true;
+	dtch_req->timlfs = true;
+
+	otx2_mbox_process(dev->mbox);
+	rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
+}

diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
new file mode 100644
index 000000000..9f7aeb7df
--- /dev/null
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_TIM_EVDEV_H__
+#define __OTX2_TIM_EVDEV_H__
+
+#include
+
+#include "otx2_dev.h"
+
+#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
+
+struct otx2_tim_evdev {
+	struct rte_pci_device *pci_dev;
+	struct otx2_mbox *mbox;
+	uint16_t nb_rings;
+	uintptr_t bar2;
+};
+
+static inline struct otx2_tim_evdev *
+tim_priv_get(void)
+{
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
+	if (mz == NULL)
+		return NULL;
+
+	return mz->addr;
+}
+
+void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
+void otx2_tim_fini(void);
+
+#endif /* __OTX2_TIM_EVDEV_H__ */

From patchwork Fri Jun 28 18:23:38 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55632
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
Date: Fri, 28 Jun 2019 23:53:38 +0530
Message-ID: <20190628182354.228-28-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
References: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 27/42] event/octeontx2: add timer adapter capabilities
From: Pavan Nikhilesh

Add a function to retrieve event timer adapter capabilities.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx2/otx2_evdev.c     |  2 ++
 drivers/event/octeontx2/otx2_tim_evdev.c | 19 +++++++++++++++++++
 drivers/event/octeontx2/otx2_tim_evdev.h |  5 +++++
 3 files changed, 26 insertions(+)

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index a716167b3..a1222b3cf 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1092,6 +1092,8 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 	.port_unlink = otx2_sso_port_unlink,
 	.timeout_ticks = otx2_sso_timeout_ticks,
 
+	.timer_adapter_caps_get = otx2_tim_caps_get,
+
 	.xstats_get = otx2_sso_xstats_get,
 	.xstats_reset = otx2_sso_xstats_reset,
 	.xstats_get_names = otx2_sso_xstats_get_names,

diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index 004701f64..0f20c163b 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -5,6 +5,25 @@
 #include "otx2_evdev.h"
 #include "otx2_tim_evdev.h"
 
+int
+otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
+		  uint32_t *caps,
+		  const struct rte_event_timer_adapter_ops **ops)
+{
+	struct otx2_tim_evdev *dev = tim_priv_get();
+
+	RTE_SET_USED(flags);
+	RTE_SET_USED(ops);
+	if (dev == NULL)
+		return -ENODEV;
+
+	/* Store evdev pointer for later use.
+	 */
+	dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
+	*caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT;
+
+	return 0;
+}
+
 void
 otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
 {

diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index 9f7aeb7df..e94c61b1a 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -13,6 +13,7 @@
 struct otx2_tim_evdev {
 	struct rte_pci_device *pci_dev;
+	struct rte_eventdev *event_dev;
 	struct otx2_mbox *mbox;
 	uint16_t nb_rings;
 	uintptr_t bar2;
@@ -30,6 +31,10 @@ tim_priv_get(void)
 	return mz->addr;
 }
 
+int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
+		      uint32_t *caps,
+		      const struct rte_event_timer_adapter_ops **ops);
+
 void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
 void otx2_tim_fini(void);

From patchwork Fri Jun 28 18:23:39 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55633
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:39 +0530
Message-ID: <20190628182354.228-29-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 28/42] event/octeontx2: create and free timer adapter

From: Pavan Nikhilesh

When the application calls timer adapter create, the driver does the following: - Allocate a TIM LF based on the number of LFs provisioned. - Verify the supplied config parameters.
- Allocate the memory required for: * buckets, sized from the min and max timeout supplied; * the chunk pool, sized from the number of timers. On free: - Free the allocated bucket and chunk memory. - Free the allocated TIM LF.

Signed-off-by: Pavan Nikhilesh
---
drivers/event/octeontx2/otx2_tim_evdev.c | 259 ++++++++++++++++++++++- drivers/event/octeontx2/otx2_tim_evdev.h | 55 +++++ 2 files changed, 313 insertions(+), 1 deletion(-) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index 0f20c163b..e24f7ce9e 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -2,9 +2,263 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include +#include + #include "otx2_evdev.h" #include "otx2_tim_evdev.h" +static struct rte_event_timer_adapter_ops otx2_tim_ops; + +static int +tim_chnk_pool_create(struct otx2_tim_ring *tim_ring, + struct rte_event_timer_adapter_conf *rcfg) +{ + unsigned int cache_sz = (tim_ring->nb_chunks / 1.5); + unsigned int mp_flags = 0; + char pool_name[25]; + int rc; + + /* Create chunk pool.
*/ + if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) { + mp_flags = MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET; + otx2_tim_dbg("Using single producer mode"); + tim_ring->prod_type_sp = true; + } + + snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d", + tim_ring->ring_id); + + if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE) + cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE; + + /* NPA need not have cache as free is not visible to SW */ + tim_ring->chunk_pool = rte_mempool_create_empty(pool_name, + tim_ring->nb_chunks, + tim_ring->chunk_sz, + 0, 0, rte_socket_id(), + mp_flags); + + if (tim_ring->chunk_pool == NULL) { + otx2_err("Unable to create chunkpool."); + return -ENOMEM; + } + + rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool, + rte_mbuf_platform_mempool_ops(), NULL); + if (rc < 0) { + otx2_err("Unable to set chunkpool ops"); + goto free; + } + + rc = rte_mempool_populate_default(tim_ring->chunk_pool); + if (rc < 0) { + otx2_err("Unable to populate chunkpool."); + goto free; + } + tim_ring->aura = npa_lf_aura_handle_to_aura( + tim_ring->chunk_pool->pool_id); + tim_ring->ena_dfb = 0; + + return 0; + +free: + rte_mempool_free(tim_ring->chunk_pool); + return rc; +} + +static void +tim_err_desc(int rc) +{ + switch (rc) { + case TIM_AF_NO_RINGS_LEFT: + otx2_err("Unable to allocate new TIM ring."); + break; + case TIM_AF_INVALID_NPA_PF_FUNC: + otx2_err("Invalid NPA pf func."); + break; + case TIM_AF_INVALID_SSO_PF_FUNC: + otx2_err("Invalid SSO pf func."); + break; + case TIM_AF_RING_STILL_RUNNING: + otx2_tim_dbg("Ring busy."); + break; + case TIM_AF_LF_INVALID: + otx2_err("Invalid Ring id."); + break; + case TIM_AF_CSIZE_NOT_ALIGNED: + otx2_err("Chunk size specified needs to be multiple of 16."); + break; + case TIM_AF_CSIZE_TOO_SMALL: + otx2_err("Chunk size too small."); + break; + case TIM_AF_CSIZE_TOO_BIG: + otx2_err("Chunk size too big."); + break; + case TIM_AF_INTERVAL_TOO_SMALL: + otx2_err("Bucket traversal interval too small."); + break; + case
TIM_AF_INVALID_BIG_ENDIAN_VALUE: + otx2_err("Invalid Big endian value."); + break; + case TIM_AF_INVALID_CLOCK_SOURCE: + otx2_err("Invalid Clock source specified."); + break; + case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED: + otx2_err("GPIO clock source not enabled."); + break; + case TIM_AF_INVALID_BSIZE: + otx2_err("Invalid bucket size."); + break; + case TIM_AF_INVALID_ENABLE_PERIODIC: + otx2_err("Invalid enable periodic value."); + break; + case TIM_AF_INVALID_ENABLE_DONTFREE: + otx2_err("Invalid Don't free value."); + break; + case TIM_AF_ENA_DONTFRE_NSET_PERIODIC: + otx2_err("Don't free bit not set when periodic is enabled."); + break; + case TIM_AF_RING_ALREADY_DISABLED: + otx2_err("Ring already stopped."); + break; + default: + otx2_err("Unknown Error."); + } +} + +static int +otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) +{ + struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf; + struct otx2_tim_evdev *dev = tim_priv_get(); + struct otx2_tim_ring *tim_ring; + struct tim_config_req *cfg_req; + struct tim_ring_req *free_req; + struct tim_lf_alloc_req *req; + struct tim_lf_alloc_rsp *rsp; + uint64_t nb_timers; + int rc; + + if (dev == NULL) + return -ENODEV; + + if (adptr->data->id >= dev->nb_rings) + return -ENODEV; + + req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox); + req->npa_pf_func = otx2_npa_pf_func_get(); + req->sso_pf_func = otx2_sso_pf_func_get(); + req->ring = adptr->data->id; + + rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp); + if (rc < 0) { + tim_err_desc(rc); + return -ENODEV; + } + + if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10), + rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) { + rc = -ERANGE; + goto rng_mem_err; + } + + tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0); + if (tim_ring == NULL) { + rc = -ENOMEM; + goto rng_mem_err; + } + + adptr->data->adapter_priv = tim_ring; + + tim_ring->tenns_clk_freq = rsp->tenns_clk; + tim_ring->clk_src = (int)rcfg->clk_src; + tim_ring->ring_id = adptr->data->id;
+ tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10); + tim_ring->max_tout = rcfg->max_tmo_ns; + tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec); + tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ; + nb_timers = rcfg->nb_timers; + tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS( + tim_ring->chunk_sz); + tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); + + /* Create buckets. */ + tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) * + sizeof(struct otx2_tim_bkt), + RTE_CACHE_LINE_SIZE); + if (tim_ring->bkt == NULL) + goto bkt_mem_err; + + rc = tim_chnk_pool_create(tim_ring, rcfg); + if (rc < 0) + goto chnk_mem_err; + + cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox); + + cfg_req->ring = tim_ring->ring_id; + cfg_req->bigendian = false; + cfg_req->clocksource = tim_ring->clk_src; + cfg_req->enableperiodic = false; + cfg_req->enabledontfreebuffer = tim_ring->ena_dfb; + cfg_req->bucketsize = tim_ring->nb_bkts; + cfg_req->chunksize = tim_ring->chunk_sz; + cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec, + tim_ring->tenns_clk_freq); + + rc = otx2_mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + goto chnk_mem_err; + } + + tim_ring->base = dev->bar2 + + (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12); + + otx2_write64((uint64_t)tim_ring->bkt, + tim_ring->base + TIM_LF_RING_BASE); + otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA); + + return rc; + +chnk_mem_err: + rte_free(tim_ring->bkt); +bkt_mem_err: + rte_free(tim_ring); +rng_mem_err: + free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox); + free_req->ring = adptr->data->id; + otx2_mbox_process(dev->mbox); + return rc; +} + +static int +otx2_tim_ring_free(struct rte_event_timer_adapter *adptr) +{ + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + struct otx2_tim_evdev *dev = tim_priv_get(); + struct tim_ring_req *req; + int rc; + + if (dev == NULL) + return -ENODEV; + + req = 
otx2_mbox_alloc_msg_tim_lf_free(dev->mbox); + req->ring = tim_ring->ring_id; + + rc = otx2_mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + return -EBUSY; + } + + rte_free(tim_ring->bkt); + rte_mempool_free(tim_ring->chunk_pool); + rte_free(adptr->data->adapter_priv); + + return 0; +} + int otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, uint32_t *caps, @@ -13,13 +267,16 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, struct otx2_tim_evdev *dev = tim_priv_get(); RTE_SET_USED(flags); - RTE_SET_USED(ops); if (dev == NULL) return -ENODEV; + otx2_tim_ops.init = otx2_tim_ring_create; + otx2_tim_ops.uninit = otx2_tim_ring_free; + /* Store evdev pointer for later use. */ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev; *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT; + *ops = &otx2_tim_ops; return 0; } diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index e94c61b1a..aaa4d93f5 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -6,11 +6,47 @@ #define __OTX2_TIM_EVDEV_H__ #include +#include #include "otx2_dev.h" #define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev +#define otx2_tim_func_trace otx2_tim_dbg + +#define TIM_LF_RING_AURA (0x0) +#define TIM_LF_RING_BASE (0x130) + +#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) +#define OTX2_TIM_CHUNK_ALIGNMENT (16) +#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1) +#define OTX2_TIM_MIN_TMO_TKS (256) + +enum otx2_tim_clk_src { + OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK, + OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0, + OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1, + OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2, +}; + +struct otx2_tim_bkt { + uint64_t first_chunk; + union { + uint64_t w1; + struct { + uint32_t nb_entry; + uint8_t sbt:1; + uint8_t hbt:1; + uint8_t bsk:1; + uint8_t rsvd:5; + uint8_t lock; 
+ int16_t chunk_remainder; + }; + }; + uint64_t current_chunk; + uint64_t pad; +} __rte_packed __rte_aligned(32); + struct otx2_tim_evdev { struct rte_pci_device *pci_dev; struct rte_eventdev *event_dev; @@ -19,6 +55,25 @@ struct otx2_tim_evdev { uintptr_t bar2; }; +struct otx2_tim_ring { + uintptr_t base; + uint16_t nb_chunk_slots; + uint32_t nb_bkts; + struct otx2_tim_bkt *bkt; + struct rte_mempool *chunk_pool; + uint64_t tck_int; + uint8_t prod_type_sp; + uint8_t ena_dfb; + uint16_t ring_id; + uint32_t aura; + uint64_t tck_nsec; + uint64_t max_tout; + uint64_t nb_chunks; + uint64_t chunk_sz; + uint64_t tenns_clk_freq; + enum otx2_tim_clk_src clk_src; +} __rte_cache_aligned; + static inline struct otx2_tim_evdev * tim_priv_get(void) {

From patchwork Fri Jun 28 18:23:40 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55634
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:40 +0530
Message-ID: <20190628182354.228-30-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 29/42] event/octeontx2: allow TIM to optimize config

From: Pavan Nikhilesh

Allow TIM to optimize the user-supplied configuration based on the RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES flag.
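The adjustment in the patch below picks whichever neighbouring power of two is closer to the requested bucket count and then recomputes the tick interval from it. That rounding choice can be sketched in isolation (plain C; `next_pow2`, `prev_pow2`, and `optimize_nb_bkts` are hand-rolled illustrative stand-ins for DPDK's `rte_align32pow2`/`rte_align32prevpow2` and the driver's `tim_optimze_bkt_param`, with the min-tick and max-bucket validity checks omitted):

```c
#include <stdint.h>

/* Round up to the next power of two, like DPDK's rte_align32pow2(). */
static uint32_t next_pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

/* Round down to the previous power of two, like rte_align32prevpow2(). */
static uint32_t prev_pow2(uint32_t x)
{
	return next_pow2(x + 1) >> 1;
}

/* Pick the power of two nearest to the requested bucket count,
 * mirroring the hbkts/lbkts selection in tim_optimze_bkt_param(). */
static uint32_t optimize_nb_bkts(uint32_t nb_bkts)
{
	uint32_t hbkts = next_pow2(nb_bkts);
	uint32_t lbkts = prev_pow2(nb_bkts);

	return (hbkts - nb_bkts) < (nb_bkts - lbkts) ? hbkts : lbkts;
}
```

With a requested count of 1000 buckets this rounds up to 1024; with 520 it rounds down to 512, matching the "nearer power of two wins" selection the patch implements.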
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev.h | 1 + drivers/event/octeontx2/otx2_tim_evdev.c | 62 +++++++++++++++++++++++- drivers/event/octeontx2/otx2_tim_evdev.h | 3 ++ 3 files changed, 64 insertions(+), 2 deletions(-) diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index fc8dde416..1e15b7e1c 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -76,6 +76,7 @@ #define NSEC2USEC(__ns) ((__ns) / 1E3) #define USEC2NSEC(__us) ((__us) * 1E3) #define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9) +#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq)) enum otx2_sso_lf_type { SSO_LF_GGRP, diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index e24f7ce9e..a0953bb49 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -10,6 +10,51 @@ static struct rte_event_timer_adapter_ops otx2_tim_ops; +static void +tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring) +{ + uint64_t tck_nsec; + uint32_t hbkts; + uint32_t lbkts; + + hbkts = rte_align32pow2(tim_ring->nb_bkts); + tck_nsec = RTE_ALIGN_MUL_CEIL(tim_ring->max_tout / (hbkts - 1), 10); + + if ((tck_nsec < TICK2NSEC(OTX2_TIM_MIN_TMO_TKS, + tim_ring->tenns_clk_freq) || + hbkts > OTX2_TIM_MAX_BUCKETS)) + hbkts = 0; + + lbkts = rte_align32prevpow2(tim_ring->nb_bkts); + tck_nsec = RTE_ALIGN_MUL_CEIL((tim_ring->max_tout / (lbkts - 1)), 10); + + if ((tck_nsec < TICK2NSEC(OTX2_TIM_MIN_TMO_TKS, + tim_ring->tenns_clk_freq) || + lbkts > OTX2_TIM_MAX_BUCKETS)) + lbkts = 0; + + if (!hbkts && !lbkts) + return; + + if (!hbkts) { + tim_ring->nb_bkts = lbkts; + goto end; + } else if (!lbkts) { + tim_ring->nb_bkts = hbkts; + goto end; + } + + tim_ring->nb_bkts = (hbkts - tim_ring->nb_bkts) < + (tim_ring->nb_bkts - lbkts) ? 
hbkts : lbkts; +end: + tim_ring->optimized = true; + tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL((tim_ring->max_tout / + (tim_ring->nb_bkts - 1)), 10); + otx2_tim_dbg("Optimized configured values"); + otx2_tim_dbg("Nb_bkts : %" PRIu32 "", tim_ring->nb_bkts); + otx2_tim_dbg("Tck_nsec : %" PRIu64 "", tim_ring->tck_nsec); +} + static int tim_chnk_pool_create(struct otx2_tim_ring *tim_ring, struct rte_event_timer_adapter_conf *rcfg) @@ -159,8 +204,13 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10), rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) { - rc = -ERANGE; - goto rng_mem_err; + if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES) + rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS, + rsp->tenns_clk); + else { + rc = -ERANGE; + goto rng_mem_err; + } } tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0); @@ -183,6 +233,14 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->chunk_sz); tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); + /* Try to optimize the bucket parameters. */ + if ((rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)) { + if (rte_is_power_of_2(tim_ring->nb_bkts)) + tim_ring->optimized = true; + else + tim_optimze_bkt_param(tim_ring); + } + /* Create buckets. 
*/ tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) * sizeof(struct otx2_tim_bkt), diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index aaa4d93f5..fdd076ebd 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -17,6 +17,8 @@ #define TIM_LF_RING_AURA (0x0) #define TIM_LF_RING_BASE (0x130) +#define OTX2_MAX_TIM_RINGS (256) +#define OTX2_TIM_MAX_BUCKETS (0xFFFFF) #define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) #define OTX2_TIM_CHUNK_ALIGNMENT (16) #define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1) @@ -63,6 +65,7 @@ struct otx2_tim_ring { struct rte_mempool *chunk_pool; uint64_t tck_int; uint8_t prod_type_sp; + uint8_t optimized; uint8_t ena_dfb; uint16_t ring_id; uint32_t aura;

From patchwork Fri Jun 28 18:23:41 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55635
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:41 +0530
Message-ID: <20190628182354.228-31-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 30/42] event/octeontx2: add devargs to disable NPA

From: Pavan Nikhilesh

If the chunks are allocated from NPA, TIM can automatically free them while traversing the list of chunks. Add a devargs option to disable NPA and manage chunks with a software mempool instead.
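The devargs appended after the PCI address form a comma-separated key=value list, which the driver scans with the rte_kvargs helpers. As a rough standalone illustration of that lookup, here is a hand-rolled scanner (the `devarg_int` helper is invented for this sketch and is not part of the driver or of DPDK):

```c
#include <stdlib.h>
#include <string.h>

/* Scan a devargs-style "bus:dev.fn,key=value,key=value" string for
 * `key` and return its integer value, or `def` when the key is absent. */
static int devarg_int(const char *args, const char *key, int def)
{
	size_t klen = strlen(key);
	const char *p = args;

	while ((p = strstr(p, key)) != NULL) {
		/* Accept only whole keys that follow a ',' and precede '='. */
		if ((p == args || p[-1] == ',') && p[klen] == '=')
			return atoi(p + klen + 1);
		p += klen;
	}
	return def;
}
```

Scanning "0002:0e:00.0,tim_disable_npa=1" for tim_disable_npa yields 1, while a devargs string without the key falls back to the default; the real driver obtains the same effect through `rte_kvargs_parse()` and `rte_kvargs_process()`.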
Example: --dev "0002:0e:00.0,tim_disable_npa=1"

Signed-off-by: Pavan Nikhilesh
---
doc/guides/eventdevs/octeontx2.rst | 9 +++ drivers/event/octeontx2/otx2_tim_evdev.c | 81 +++++++++++++++++------- drivers/event/octeontx2/otx2_tim_evdev.h | 3 + 3 files changed, 70 insertions(+), 23 deletions(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index 98d0dfb6f..d24f81629 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -94,6 +94,15 @@ Runtime Config Options --dev "0002:0e:00.0,selftest=1" +- ``TIM disable NPA`` + + By default, chunks are allocated from NPA so that TIM can automatically free + them while traversing the list of chunks. The ``tim_disable_npa`` devargs + parameter disables NPA and uses a software mempool to manage chunks. + For example:: + + --dev "0002:0e:00.0,tim_disable_npa=1" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index a0953bb49..4b9816676 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -2,6 +2,7 @@ * Copyright(C) 2019 Marvell International Ltd.
*/ +#include #include @@ -77,33 +78,45 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring, if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE) cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE; - /* NPA need not have cache as free is not visible to SW */ - tim_ring->chunk_pool = rte_mempool_create_empty(pool_name, - tim_ring->nb_chunks, - tim_ring->chunk_sz, - 0, 0, rte_socket_id(), - mp_flags); + if (!tim_ring->disable_npa) { + /* NPA need not have cache as free is not visible to SW */ + tim_ring->chunk_pool = rte_mempool_create_empty(pool_name, + tim_ring->nb_chunks, tim_ring->chunk_sz, + 0, 0, rte_socket_id(), mp_flags); - if (tim_ring->chunk_pool == NULL) { - otx2_err("Unable to create chunkpool."); - return -ENOMEM; - } + if (tim_ring->chunk_pool == NULL) { + otx2_err("Unable to create chunkpool."); + return -ENOMEM; + } - rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool, - rte_mbuf_platform_mempool_ops(), NULL); - if (rc < 0) { - otx2_err("Unable to set chunkpool ops"); - goto free; - } + rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool, + rte_mbuf_platform_mempool_ops(), + NULL); + if (rc < 0) { + otx2_err("Unable to set chunkpool ops"); + goto free; + } - rc = rte_mempool_populate_default(tim_ring->chunk_pool); - if (rc < 0) { - otx2_err("Unable to populate chunkpool."); - goto free; + rc = rte_mempool_populate_default(tim_ring->chunk_pool); + if (rc < 0) { + otx2_err("Unable to populate chunkpool."); + goto free; + } + tim_ring->aura = npa_lf_aura_handle_to_aura( + tim_ring->chunk_pool->pool_id); + tim_ring->ena_dfb = 0; + } else { + tim_ring->chunk_pool = rte_mempool_create(pool_name, + tim_ring->nb_chunks, tim_ring->chunk_sz, + cache_sz, 0, NULL, NULL, NULL, NULL, + rte_socket_id(), + mp_flags); + if (tim_ring->chunk_pool == NULL) { + otx2_err("Unable to create chunkpool."); + return -ENOMEM; + } + tim_ring->ena_dfb = 1; } - tim_ring->aura = npa_lf_aura_handle_to_aura( + tim_ring->chunk_pool->pool_id); - tim_ring->ena_dfb = 0; return 0; @@ -229,6
+242,8 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec); tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ; nb_timers = rcfg->nb_timers; + tim_ring->disable_npa = dev->disable_npa; + tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS( tim_ring->chunk_sz); tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); @@ -339,6 +354,24 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, return 0; } +#define OTX2_TIM_DISABLE_NPA "tim_disable_npa" + +static void +tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) +{ + struct rte_kvargs *kvlist; + + if (devargs == NULL) + return; + + kvlist = rte_kvargs_parse(devargs->args, NULL); + if (kvlist == NULL) + return; + + rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA, + &parse_kvargs_flag, &dev->disable_npa); +} + void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) { @@ -364,6 +397,8 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) dev->mbox = cmn_dev->mbox; dev->bar2 = cmn_dev->bar2; + tim_parse_devargs(pci_dev->device.devargs, dev); + otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox); rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt); if (rc < 0) { diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index fdd076ebd..0a0a0b4d8 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -55,6 +55,8 @@ struct otx2_tim_evdev { struct otx2_mbox *mbox; uint16_t nb_rings; uintptr_t bar2; + /* Dev args */ + uint8_t disable_npa; }; struct otx2_tim_ring { @@ -65,6 +67,7 @@ struct otx2_tim_ring { struct rte_mempool *chunk_pool; uint64_t tck_int; uint8_t prod_type_sp; + uint8_t disable_npa; uint8_t optimized; uint8_t ena_dfb; uint16_t ring_id; From patchwork Fri Jun 28 18:23:42 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55636
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:42 +0530
Message-ID: <20190628182354.228-32-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 31/42] event/octeontx2: add devargs to modify chunk slots

From: Pavan Nikhilesh

Add devargs support to modify the number of chunk slots. Chunks are used to store event timers; a chunk can be visualised as an array where the last element points to the next chunk and the rest are used to store events. TIM traverses the list of chunks and enqueues the event timers to SSO. If no argument is passed, a default value of 255 is used.

Example: --dev "0002:0e:00.0,tim_chnk_slots=511"

Signed-off-by: Pavan Nikhilesh
---
doc/guides/eventdevs/octeontx2.rst | 11 +++++++++++ drivers/event/octeontx2/otx2_tim_evdev.c | 14 +++++++++++++- drivers/event/octeontx2/otx2_tim_evdev.h | 4 ++++ 3 files changed, 28 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index d24f81629..1e79bd916 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -103,6 +103,17 @@ Runtime Config Options --dev "0002:0e:00.0,tim_disable_npa=1" +- ``TIM modify chunk slots`` + + The ``tim_chnk_slots`` devargs can be used to modify the number of chunk slots.
+ Chunks are used to store event timers; a chunk can be visualised as an array + where the last element points to the next chunk and the rest are used to + store events. TIM traverses the list of chunks and enqueues the event timers + to SSO. The default value is 255 and the max value is 4095. + For example:: + + --dev "0002:0e:00.0,tim_chnk_slots=1023" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index 4b9816676..c0a692bb5 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -240,7 +240,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10); tim_ring->max_tout = rcfg->max_tmo_ns; tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec); - tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ; + tim_ring->chunk_sz = dev->chunk_sz; nb_timers = rcfg->nb_timers; tim_ring->disable_npa = dev->disable_npa; @@ -355,6 +355,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, } #define OTX2_TIM_DISABLE_NPA "tim_disable_npa" +#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots" static void tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) @@ -370,6 +371,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA, &parse_kvargs_flag, &dev->disable_npa); + rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS, + &parse_kvargs_value, &dev->chunk_slots); } void @@ -423,6 +426,15 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) goto mz_free; } + if (dev->chunk_slots && + dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS && + dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) { + dev->chunk_sz = (dev->chunk_slots + 1) * + OTX2_TIM_CHUNK_ALIGNMENT; + } else { + dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ; + } + return; mz_free: diff --git
a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 0a0a0b4d8..9636d8414 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -22,6 +22,8 @@ #define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) #define OTX2_TIM_CHUNK_ALIGNMENT (16) #define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1) +#define OTX2_TIM_MIN_CHUNK_SLOTS (0x1) +#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE) #define OTX2_TIM_MIN_TMO_TKS (256) enum otx2_tim_clk_src { @@ -54,9 +56,11 @@ struct otx2_tim_evdev { struct rte_eventdev *event_dev; struct otx2_mbox *mbox; uint16_t nb_rings; + uint32_t chunk_sz; uintptr_t bar2; /* Dev args */ uint8_t disable_npa; + uint16_t chunk_slots; }; struct otx2_tim_ring { From patchwork Fri Jun 28 18:23:43 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55637 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6EBD41BB32; Fri, 28 Jun 2019 20:25:38 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 664991BA59 for ; Fri, 28 Jun 2019 20:25:17 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIOlWo013964 for ; Fri, 28 Jun 2019 11:25:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=1jnQ6wwTCxx4YJQgPHMeb+zKHQoI1dUfKOmGnNgwFGs=; b=F4d/u3xMvS70zIOhhLZp/RR1ExHJ/Zg+rg3evwXXsw79mJuZLIOgeOh5ZPoppjc5fX36 W5+I4+pepmc7Q0zrue0aO/Ebv6WBQiFypoM8R3WTWuRNQIaRxUt6iCQl9Ndj+0/m/EIk 
j97OPuLNdn25GRI4mJiNqHv613NZGceIzCdSIbwpHXN0sRugpQ+KN9Bl+uF8IpSj7mSb tOqmHJRrhs2ptjGKeBhXWgobvTAUHlQCHh9B9LeNQw7jp2afTfr13HzKV/djd+/QhqV2 Dz/AVKMZtlQpYFJ38Tbx5yBjO+1IcIkzCkrkC3/KQYj9VYaEVJp8oB6hvIjnnZQrYKbx dw== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2tdkg191mb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 28 Jun 2019 11:25:16 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Fri, 28 Jun 2019 11:25:13 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Fri, 28 Jun 2019 11:25:13 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id C491D3F7041; Fri, 28 Jun 2019 11:25:12 -0700 (PDT) From: To: , Pavan Nikhilesh CC: Date: Fri, 28 Jun 2019 23:53:43 +0530 Message-ID: <20190628182354.228-33-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 32/42] event/octeontx2: add TIM IRQ handlers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Register and implement TIM IRQ handlers for error interrupts Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_evdev_irq.c | 97 ++++++++++++++++++++++++ drivers/event/octeontx2/otx2_tim_evdev.c | 37 +++++++++ drivers/event/octeontx2/otx2_tim_evdev.h | 14 ++++ 3 files changed, 148 insertions(+) diff 
--git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c index 7379bb17f..a2033646e 100644 --- a/drivers/event/octeontx2/otx2_evdev_irq.c +++ b/drivers/event/octeontx2/otx2_evdev_irq.c @@ -3,6 +3,7 @@ */ #include "otx2_evdev.h" +#include "otx2_tim_evdev.h" static void sso_lf_irq(void *param) @@ -173,3 +174,99 @@ sso_unregister_irqs(const struct rte_eventdev *event_dev) ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base); } } + +static void +tim_lf_irq(void *param) +{ + uintptr_t base = (uintptr_t)param; + uint64_t intr; + uint8_t ring; + + ring = (base >> 12) & 0xFF; + + intr = otx2_read64(base + TIM_LF_NRSPERR_INT); + otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr); + intr = otx2_read64(base + TIM_LF_RAS_INT); + otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr); + + /* Clear interrupt */ + otx2_write64(intr, base + TIM_LF_NRSPERR_INT); + otx2_write64(intr, base + TIM_LF_RAS_INT); +} + +static int +tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, + uintptr_t base) +{ + struct rte_intr_handle *handle = &pci_dev->intr_handle; + int rc, vec; + + vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT); + /* Set used interrupt vectors */ + rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec); + /* Enable hw interrupt */ + otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S); + + vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, base + TIM_LF_RAS_INT); + /* Set used interrupt vectors */ + rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec); + /* Enable hw interrupt */ + otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S); + + return rc; +} + +static void +tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff, + uintptr_t base) +{ + struct rte_intr_handle *handle = &pci_dev->intr_handle; + int vec; + + vec = 
tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C); + otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec); + + vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT; + + /* Clear err interrupt */ + otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C); + otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec); +} + +int +tim_register_irq(uint16_t ring_id) +{ + struct otx2_tim_evdev *dev = tim_priv_get(); + int rc = -EINVAL; + uintptr_t base; + + if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) { + otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x", + ring_id, dev->tim_msixoff[ring_id]); + goto fail; + } + + base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); + rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base); +fail: + return rc; +} + +void +tim_unregister_irq(uint16_t ring_id) +{ + struct otx2_tim_evdev *dev = tim_priv_get(); + uintptr_t base; + + base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12); + tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base); +} diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index c0a692bb5..8324ded51 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -11,6 +11,24 @@ static struct rte_event_timer_adapter_ops otx2_tim_ops; +static inline int +tim_get_msix_offsets(void) +{ + struct otx2_tim_evdev *dev = tim_priv_get(); + struct otx2_mbox *mbox = dev->mbox; + struct msix_offset_rsp *msix_rsp; + int i, rc; + + /* Get TIM MSIX vector offsets */ + otx2_mbox_alloc_msg_msix_offset(mbox); + rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp); + + for (i = 0; i < dev->nb_rings; i++) + dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i]; + + return rc; +} + static void tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring) { @@ -288,6 +306,10 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) 
tim_ring->base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12); + rc = tim_register_irq(tim_ring->ring_id); + if (rc < 0) + goto chnk_mem_err; + otx2_write64((uint64_t)tim_ring->bkt, tim_ring->base + TIM_LF_RING_BASE); otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA); @@ -316,6 +338,8 @@ otx2_tim_ring_free(struct rte_event_timer_adapter *adptr) if (dev == NULL) return -ENODEV; + tim_unregister_irq(tim_ring->ring_id); + req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox); req->ring = tim_ring->ring_id; @@ -379,6 +403,7 @@ void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) { struct rsrc_attach_req *atch_req; + struct rsrc_detach_req *dtch_req; struct free_rsrcs_rsp *rsrc_cnt; const struct rte_memzone *mz; struct otx2_tim_evdev *dev; @@ -426,6 +451,12 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) goto mz_free; } + rc = tim_get_msix_offsets(); + if (rc < 0) { + otx2_err("Unable to get MSIX offsets for TIM."); + goto detach; + } + if (dev->chunk_slots && dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS && dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) { @@ -437,6 +468,12 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) return; +detach: + dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox); + dtch_req->partial = true; + dtch_req->timlfs = true; + + otx2_mbox_process(dev->mbox); mz_free: rte_memzone_free(mz); } diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 9636d8414..aac7dc711 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -16,6 +16,14 @@ #define TIM_LF_RING_AURA (0x0) #define TIM_LF_RING_BASE (0x130) +#define TIM_LF_NRSPERR_INT (0x200) +#define TIM_LF_NRSPERR_INT_W1S (0x208) +#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210) +#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218) +#define TIM_LF_RAS_INT (0x300) +#define TIM_LF_RAS_INT_W1S (0x308) +#define 
TIM_LF_RAS_INT_ENA_W1S (0x310) +#define TIM_LF_RAS_INT_ENA_W1C (0x318) #define OTX2_MAX_TIM_RINGS (256) #define OTX2_TIM_MAX_BUCKETS (0xFFFFF) @@ -61,6 +69,8 @@ struct otx2_tim_evdev { /* Dev args */ uint8_t disable_npa; uint16_t chunk_slots; + /* MSIX offsets */ + uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS]; }; struct otx2_tim_ring { @@ -103,4 +113,8 @@ int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags, void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev); void otx2_tim_fini(void); +/* TIM IRQ */ +int tim_register_irq(uint16_t ring_id); +void tim_unregister_irq(uint16_t ring_id); + #endif /* __OTX2_TIM_EVDEV_H__ */ From patchwork Fri Jun 28 18:23:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55638 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 75E741BB3E; Fri, 28 Jun 2019 20:25:40 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 4635C1BA59 for ; Fri, 28 Jun 2019 20:25:19 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIPII8014547 for ; Fri, 28 Jun 2019 11:25:18 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Iswh/8zCaBwxGy0V0py+xjrGnSZFncti8IMsdOyM1AU=; b=kAheqx62pkzIMLcs427KEhDjzcRdtl6Ep3ONKvR+HBtFNgIxQ2J6Tfg54gOXAre5Pr60 HjgHZG1ZPYXTpIVUx9LCG/Ew4bNFRcyxfSV9yBR2tzXN32C179jED44nn757y7QZ+8BP xBe1JnPQvtT57axwNi32EtO3tfViPBhSarnlw95DZaV6APRi3C0RsVFCUb2JiY8fjtYM 
ZFkV9jwqTTIJ0TuSVSZQ+SFqXfPooo6Btps66XPWhebbL6k0jlhcbuZjE5bhibtibZu1 jEPelJCQ5xsmoZS6xTkSqyaZYyOq/kmSpX+06f6fFQOIy4lge6o9yz1vqzm+C/UxMXh5 vQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2tdkg191mk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 28 Jun 2019 11:25:18 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Fri, 28 Jun 2019 11:25:16 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Fri, 28 Jun 2019 11:25:16 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id E16A43F7040; Fri, 28 Jun 2019 11:25:14 -0700 (PDT) From: To: , Pavan Nikhilesh CC: Date: Fri, 28 Jun 2019 23:53:44 +0530 Message-ID: <20190628182354.228-34-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 33/42] event/octeontx2: allow adapters to resize inflight buffers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add internal SSO functions to allow event adapters to resize SSO buffers that are used to hold in-flight events in DRAM. 
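The commit message above describes growing the SSO's XAQ buffer pool when an adapter registers additional in-flight events. A minimal standalone sketch of that sizing arithmetic follows; the cache and slack constants here are placeholders, not the driver's actual `OTX2_SSO_XAQ_CACHE_CNT`/`OTX2_SSO_XAQ_SLACK` values from `otx2_evdev.h`:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the driver's pool-sizing constants; the real
 * OTX2_SSO_XAQ_CACHE_CNT and OTX2_SSO_XAQ_SLACK values may differ. */
#define XAQ_CACHE_CNT 8u
#define XAQ_SLACK     16u

/* Size the XAQ pool as in sso_xaq_allocate(): a per-queue cache, plus one
 * XAQ buffer for every xae_waes events the adapters want in flight, plus
 * per-queue slack. */
uint32_t
sso_xaq_count(uint32_t nb_event_queues, uint32_t adptr_xae_cnt,
	      uint32_t xae_waes)
{
	uint32_t xaq_cnt = nb_event_queues * XAQ_CACHE_CNT;

	xaq_cnt += (adptr_xae_cnt / xae_waes) +
		   (XAQ_SLACK * nb_event_queues);
	return xaq_cnt;
}
```

Note that the patch wraps the actual pool swap (`sso_xae_reconfigure()`) with `sso_cleanup()` calls when the device is running, so no in-flight event still references the old mempool when it is freed.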
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/Makefile | 1 + drivers/event/octeontx2/meson.build | 1 + drivers/event/octeontx2/otx2_evdev.c | 31 ++++++++++++++++++++++ drivers/event/octeontx2/otx2_evdev.h | 5 ++++ drivers/event/octeontx2/otx2_evdev_adptr.c | 19 +++++++++++++ drivers/event/octeontx2/otx2_tim_evdev.c | 5 ++++ 6 files changed, 62 insertions(+) create mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index 2290622dd..6f8d9fe2f 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -33,6 +33,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_adptr.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_selftest.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_irq.c diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index ad7f2e084..c709b5e69 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -5,6 +5,7 @@ sources = files('otx2_worker.c', 'otx2_worker_dual.c', 'otx2_evdev.c', + 'otx2_evdev_adptr.c', 'otx2_evdev_irq.c', 'otx2_evdev_selftest.c', 'otx2_tim_evdev.c', diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index a1222b3cf..914869b6c 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -529,6 +529,9 @@ sso_xaq_allocate(struct otx2_sso_evdev *dev) xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT; if (dev->xae_cnt) xaq_cnt += dev->xae_cnt / dev->xae_waes; + else if (dev->adptr_xae_cnt) + xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) 
+ + (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues); else xaq_cnt += (dev->iue / dev->xae_waes) + (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues); @@ -1030,6 +1033,34 @@ sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable) otx2_mbox_process(dev->mbox); } +int +sso_xae_reconfigure(struct rte_eventdev *event_dev) +{ + struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct rte_mempool *prev_xaq_pool; + int rc = 0; + + if (event_dev->data->dev_started) + sso_cleanup(event_dev, 0); + + prev_xaq_pool = dev->xaq_pool; + dev->xaq_pool = NULL; + sso_xaq_allocate(dev); + rc = sso_ggrp_alloc_xaq(dev); + if (rc < 0) { + otx2_err("Failed to alloc xaq to ggrp %d", rc); + rte_mempool_free(prev_xaq_pool); + return rc; + } + + rte_mempool_free(prev_xaq_pool); + rte_mb(); + if (event_dev->data->dev_started) + sso_cleanup(event_dev, 1); + + return 0; +} + static int otx2_sso_start(struct rte_eventdev *event_dev) { diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index 1e15b7e1c..ba3aae5ba 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -129,6 +129,7 @@ struct otx2_sso_evdev { uint64_t nb_xaq_cfg; rte_iova_t fc_iova; struct rte_mempool *xaq_pool; + uint32_t adptr_xae_cnt; /* Dev args */ uint8_t dual_ws; uint8_t selftest; @@ -243,6 +244,10 @@ uint16_t otx2_ssogws_dual_deq_timeout(void *port, struct rte_event *ev, uint16_t otx2_ssogws_dual_deq_timeout_burst(void *port, struct rte_event ev[], uint16_t nb_events, uint64_t timeout_ticks); + +void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, + uint32_t event_type); +int sso_xae_reconfigure(struct rte_eventdev *event_dev); void sso_fastpath_fns_set(struct rte_eventdev *event_dev); /* Clean up API's */ typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev); diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c new file mode 100644 index 000000000..810722f89 --- /dev/null +++ 
b/drivers/event/octeontx2/otx2_evdev_adptr.c @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_evdev.h" + +void +sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type) +{ + switch (event_type) { + case RTE_EVENT_TYPE_TIMER: + { + dev->adptr_xae_cnt += (*(uint64_t *)data); + break; + } + default: + break; + } +} diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index 8324ded51..186c5d483 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -314,6 +314,11 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->base + TIM_LF_RING_BASE); otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA); + /* Update SSO xae count. */ + sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)&nb_timers, + RTE_EVENT_TYPE_TIMER); + sso_xae_reconfigure(dev->event_dev); + return rc; chnk_mem_err: From patchwork Fri Jun 28 18:23:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55639 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 206C91BA90; Fri, 28 Jun 2019 20:25:42 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 211AE1B9DF for ; Fri, 28 Jun 2019 20:25:19 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SILsKW011627 for ; Fri, 28 Jun 2019 11:25:19 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : 
content-transfer-encoding : content-type; s=pfpt0818; bh=oOECe3fZqDrstEr5Lq9OWd5ksyt129qaVZ0dK/q+k/w=; b=JEyRy97EgPasF/O7zxm/h+U+Akmnv5yx2d4mGT7M25coco/Z/4Tr3HZa16hapDTp0Tql Q8cPe+u8N9Oyt9hQYtRHt45H/fv3q9vp84vs7IK8zyJjwDVvF5WcNgTt8JHhl5Qq//zc VhfPmfqQbDT3XMpQFfJBYU8f0+CVZfXSt42OsOKI3SttC+i/58AbtAclu6gPLQU+cKY0 58f0lA9Er3BI0dayjodmAfDzRsEE3IVJj5Kzq/BVOFvF2srtDSa/jnhrcpFeXCFcdPs1 SWwdzdY/LtKJTH+SHPimr++EU1b78ekJ7MzuasNGRKvsOp2u19AOO6vKaROJd9EjRtrC Cg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0a-0016f401.pphosted.com with ESMTP id 2tdd77agqd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 28 Jun 2019 11:25:19 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Fri, 28 Jun 2019 11:25:18 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Fri, 28 Jun 2019 11:25:18 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id 1ED3C3F7040; Fri, 28 Jun 2019 11:25:16 -0700 (PDT) From: To: , Pavan Nikhilesh CC: Date: Fri, 28 Jun 2019 23:53:45 +0530 Message-ID: <20190628182354.228-35-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 34/42] event/octeontx2: add timer adapter info get function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add TIM event timer adapter info get function. 
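The `get_info` callback added here mirrors the ring's configured parameters into the adapter info structure. Below is a compilable sketch using minimal stand-in types (hypothetical: the real `struct otx2_tim_ring` and `struct rte_event_timer_adapter_info` carry many more fields, and the real callback also copies the adapter configuration with `rte_memcpy`):

```c
#include <stdint.h>

/* Minimal stand-ins for the driver and rte_eventdev structures. */
struct tim_ring {
	uint64_t max_tout;  /* configured maximum timeout, ns */
	uint64_t tck_nsec;  /* bucket tick (resolution), ns */
};

struct adapter_info {
	uint64_t min_resolution_ns;
	uint64_t max_tmo_ns;
};

/* Mirrors otx2_tim_ring_info_get(): report the ring's tick as the minimum
 * resolution and its configured span as the maximum timeout. */
void
ring_info_get(const struct tim_ring *r, struct adapter_info *info)
{
	info->max_tmo_ns = r->max_tout;
	info->min_resolution_ns = r->tck_nsec;
}
```

Applications reach this through the generic `rte_event_timer_adapter_get_info()` call once the op is wired into `otx2_tim_ops.get_info`.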
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_tim_evdev.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index 186c5d483..f2c14faaa 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -29,6 +29,18 @@ tim_get_msix_offsets(void) return rc; } +static void +otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr, + struct rte_event_timer_adapter_info *adptr_info) +{ + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + + adptr_info->max_tmo_ns = tim_ring->max_tout; + adptr_info->min_resolution_ns = tim_ring->tck_nsec; + rte_memcpy(&adptr_info->conf, &adptr->data->conf, + sizeof(struct rte_event_timer_adapter_conf)); +} + static void tim_optimze_bkt_param(struct otx2_tim_ring *tim_ring) { @@ -374,6 +386,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, otx2_tim_ops.init = otx2_tim_ring_create; otx2_tim_ops.uninit = otx2_tim_ring_free; + otx2_tim_ops.get_info = otx2_tim_ring_info_get; /* Store evdev pointer for later use. 
*/ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev; From patchwork Fri Jun 28 18:23:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55640 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 653411BB72; Fri, 28 Jun 2019 20:25:44 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id C24CB1BA65 for ; Fri, 28 Jun 2019 20:25:23 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIOlWq013964 for ; Fri, 28 Jun 2019 11:25:23 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=TU8OO9nvcHieTV75oZUhs6cuZcdZMO5BEVtNzxHfXd4=; b=ST176pixXa1Dq8jy25Mdt15hAyovP0Y/jWjb1oBrYYNFvnQ+y3CRki5OFfIvlQUMIy89 MIyLAAj0yRg2irtu4u1XAj3o1S8RSRttJs9CeDDbvA5BnfGkZ+VQv4+QoKILr/uy+zYs M4CAr0Ma/n/VTH0URDPGZVk1IQmS+Dh+ZM1wQlz2b9NSVQ78TSW0BPZGml3wHOoRffbg asyjIpYlOaUfzj+UBHGaWsw7xBo2xDUpBaTfWz1Shm9V64VFPuxD8OCe/uI7H68Ximi2 KwJvT+f8hjfLCAnMmrICZAubld/wtXMq/Rc3xNJ3LglZhEJZ1OpJkmK3a7Gk6/DJeyNc XQ== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2tdkg191mu-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 28 Jun 2019 11:25:22 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Fri, 28 Jun 2019 11:25:20 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft 
SMTP Server id 15.0.1367.3 via Frontend Transport; Fri, 28 Jun 2019 11:25:20 -0700 Received: from BG-LT7430.marvell.com (unknown [10.28.17.12]) by maili.marvell.com (Postfix) with ESMTP id 4A4BD3F7040; Fri, 28 Jun 2019 11:25:19 -0700 (PDT) From: To: , Pavan Nikhilesh CC: Date: Fri, 28 Jun 2019 23:53:46 +0530 Message-ID: <20190628182354.228-36-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com> References: <20190628182354.228-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 35/42] event/octeontx2: add TIM bucket operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add TIM bucket operations used for event timer arm and cancel. 
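These helpers all operate on the bucket's 64-bit control word, `w1`, using the `TIM_BUCKET_W1_*` shifts and masks this patch defines. A standalone sketch of the plain (non-atomic) field extraction; the driver performs the equivalent reads and updates with `__atomic` builtins so that concurrent arm/cancel threads can share a bucket:

```c
#include <stdint.h>

/* Bucket control-word (w1) layout, matching the TIM_BUCKET_W1_* macros in
 * this patch: [63:48] chunk remainder, [47:40] lock, bit 34 BSK, bit 33 HBT,
 * bit 32 SBT, [31:0] number of entries. */
#define W1_S_CHUNK_REM 48
#define W1_M_CHUNK_REM ((1ULL << (64 - W1_S_CHUNK_REM)) - 1)
#define W1_S_LOCK      40
#define W1_M_LOCK      ((1ULL << (W1_S_CHUNK_REM - W1_S_LOCK)) - 1)
#define W1_S_NENT      0
#define W1_M_NENT      ((1ULL << 32) - 1)

/* Free slots left in the bucket's current chunk. */
uint16_t bkt_rem(uint64_t w1)  { return (w1 >> W1_S_CHUNK_REM) & W1_M_CHUNK_REM; }
/* Number of lock references currently held on the bucket. */
uint8_t  bkt_lock(uint64_t w1) { return (w1 >> W1_S_LOCK) & W1_M_LOCK; }
/* Timers queued in the bucket. */
uint32_t bkt_nent(uint64_t w1) { return (w1 >> W1_S_NENT) & W1_M_NENT; }
```

Because the chunk remainder occupies the topmost bits, `tim_bkt_fetch_sema_lock()` can add `TIM_BUCKET_SEMA_WLOCK` in a single atomic fetch-add: in effect this decrements the remainder (the all-ones field value wraps to -1) and takes a lock reference in one operation.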
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/Makefile | 1 + drivers/event/octeontx2/meson.build | 1 + drivers/event/octeontx2/otx2_tim_evdev.h | 36 +++++++ drivers/event/octeontx2/otx2_tim_worker.c | 7 ++ drivers/event/octeontx2/otx2_tim_worker.h | 111 ++++++++++++++++++++++ 5 files changed, 156 insertions(+) create mode 100644 drivers/event/octeontx2/otx2_tim_worker.c create mode 100644 drivers/event/octeontx2/otx2_tim_worker.h diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile index 6f8d9fe2f..d01da6b11 100644 --- a/drivers/event/octeontx2/Makefile +++ b/drivers/event/octeontx2/Makefile @@ -32,6 +32,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker_dual.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_worker.c +SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_worker.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev_adptr.c SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_tim_evdev.c diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build index c709b5e69..bdb5beed6 100644 --- a/drivers/event/octeontx2/meson.build +++ b/drivers/event/octeontx2/meson.build @@ -9,6 +9,7 @@ sources = files('otx2_worker.c', 'otx2_evdev_irq.c', 'otx2_evdev_selftest.c', 'otx2_tim_evdev.c', + 'otx2_tim_worker.c' ) allow_experimental_apis = true diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index aac7dc711..2be5d5f07 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -25,6 +25,42 @@ #define TIM_LF_RAS_INT_ENA_W1S (0x310) #define TIM_LF_RAS_INT_ENA_W1C (0x318) +#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48) +#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \ + TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1) +#define TIM_BUCKET_W1_S_LOCK (40) +#define TIM_BUCKET_W1_M_LOCK ((1ULL 
<< \ + (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \ + TIM_BUCKET_W1_S_LOCK)) - 1) +#define TIM_BUCKET_W1_S_RSVD (35) +#define TIM_BUCKET_W1_S_BSK (34) +#define TIM_BUCKET_W1_M_BSK ((1ULL << \ + (TIM_BUCKET_W1_S_RSVD - \ + TIM_BUCKET_W1_S_BSK)) - 1) +#define TIM_BUCKET_W1_S_HBT (33) +#define TIM_BUCKET_W1_M_HBT ((1ULL << \ + (TIM_BUCKET_W1_S_BSK - \ + TIM_BUCKET_W1_S_HBT)) - 1) +#define TIM_BUCKET_W1_S_SBT (32) +#define TIM_BUCKET_W1_M_SBT ((1ULL << \ + (TIM_BUCKET_W1_S_HBT - \ + TIM_BUCKET_W1_S_SBT)) - 1) +#define TIM_BUCKET_W1_S_NUM_ENTRIES (0) +#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \ + (TIM_BUCKET_W1_S_SBT - \ + TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1) + +#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN) + +#define TIM_BUCKET_CHUNK_REMAIN \ + (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER) + +#define TIM_BUCKET_LOCK \ + (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK) + +#define TIM_BUCKET_SEMA_WLOCK \ + (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK)) + #define OTX2_MAX_TIM_RINGS (256) #define OTX2_TIM_MAX_BUCKETS (0xFFFFF) #define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c new file mode 100644 index 000000000..29ed1fd5a --- /dev/null +++ b/drivers/event/octeontx2/otx2_tim_worker.c @@ -0,0 +1,7 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_tim_evdev.h" +#include "otx2_tim_worker.h" + diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h new file mode 100644 index 000000000..ccb137d13 --- /dev/null +++ b/drivers/event/octeontx2/otx2_tim_worker.h @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_TIM_WORKER_H__ +#define __OTX2_TIM_WORKER_H__ + +#include "otx2_tim_evdev.h" + +static inline int16_t +tim_bkt_fetch_rem(uint64_t w1) +{ + return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) & + TIM_BUCKET_W1_M_CHUNK_REMAINDER; +} + +static inline int16_t +tim_bkt_get_rem(struct otx2_tim_bkt *bktp) +{ + return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE); +} + +static inline void +tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v) +{ + __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED); +} + +static inline void +tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v) +{ + __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED); +} + +static inline uint8_t +tim_bkt_get_hbt(uint64_t w1) +{ + return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT; +} + +static inline uint8_t +tim_bkt_get_bsk(uint64_t w1) +{ + return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK; +} + +static inline uint64_t +tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp) +{ + /* Clear everything except lock. 
*/ + const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK; + + return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL); +} + +static inline uint64_t +tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp) +{ + return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK, + __ATOMIC_ACQUIRE); +} + +static inline uint64_t +tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp) +{ + return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED); +} + +static inline uint64_t +tim_bkt_inc_lock(struct otx2_tim_bkt *bktp) +{ + const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK; + + return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE); +} + +static inline void +tim_bkt_dec_lock(struct otx2_tim_bkt *bktp) +{ + __atomic_add_fetch(&bktp->lock, 0xff, __ATOMIC_RELEASE); +} + +static inline uint32_t +tim_bkt_get_nent(uint64_t w1) +{ + return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) & + TIM_BUCKET_W1_M_NUM_ENTRIES; +} + +static inline void +tim_bkt_inc_nent(struct otx2_tim_bkt *bktp) +{ + __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED); +} + +static inline void +tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v) +{ + __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED); +} + +static inline uint64_t +tim_bkt_clr_nent(struct otx2_tim_bkt *bktp) +{ + const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES << + TIM_BUCKET_W1_S_NUM_ENTRIES); + + return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL); +} + +#endif /* __OTX2_TIM_WORKER_H__ */ From patchwork Fri Jun 28 18:23:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55641 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 52C611BB7A; Fri, 28 Jun 2019 20:25:46 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com 
From: Pavan Nikhilesh Bhagavatula
Date: Fri, 28 Jun 2019 23:53:47 +0530
Message-ID: <20190628182354.228-37-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
MIME-Version: 1.0 X-Proofpoint-Virus-Version:
vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 36/42] event/octeontx2: add event timer arm routine X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event timer arm routine. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_tim_evdev.c | 20 +++ drivers/event/octeontx2/otx2_tim_evdev.h | 33 ++++ drivers/event/octeontx2/otx2_tim_worker.c | 77 ++++++++ drivers/event/octeontx2/otx2_tim_worker.h | 204 ++++++++++++++++++++++ 4 files changed, 334 insertions(+) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index f2c14faaa..f4651c281 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -29,6 +29,23 @@ tim_get_msix_offsets(void) return rc; } +static void +tim_set_fp_ops(struct otx2_tim_ring *tim_ring) +{ + uint8_t prod_flag = !tim_ring->prod_type_sp; + + /* [MOD/AND] [DFB/FB] [SP][MP]*/ + const rte_event_timer_arm_burst_t arm_burst[2][2][2] = { +#define FP(_name, _f3, _f2, _f1, flags) \ + [_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name, +TIM_ARM_FASTPATH_MODES +#undef FP + }; + + otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized] + [tim_ring->ena_dfb][prod_flag]; +} + static void otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr, struct rte_event_timer_adapter_info *adptr_info) @@ -326,6 +343,9 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->base + TIM_LF_RING_BASE); otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA); + /* Set fastpath ops. */ + tim_set_fp_ops(tim_ring); + /* Update SSO xae count. 
*/ sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)&nb_timers, RTE_EVENT_TYPE_TIMER); diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 2be5d5f07..01b271507 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -7,6 +7,7 @@ #include #include +#include #include "otx2_dev.h" @@ -70,6 +71,13 @@ #define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE) #define OTX2_TIM_MIN_TMO_TKS (256) +#define OTX2_TIM_SP 0x1 +#define OTX2_TIM_MP 0x2 +#define OTX2_TIM_BKT_AND 0x4 +#define OTX2_TIM_BKT_MOD 0x8 +#define OTX2_TIM_ENA_FB 0x10 +#define OTX2_TIM_ENA_DFB 0x20 + enum otx2_tim_clk_src { OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK, OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0, @@ -95,6 +103,11 @@ struct otx2_tim_bkt { uint64_t pad; } __rte_packed __rte_aligned(32); +struct otx2_tim_ent { + uint64_t w0; + uint64_t wqe; +} __rte_packed; + struct otx2_tim_evdev { struct rte_pci_device *pci_dev; struct rte_eventdev *event_dev; @@ -111,8 +124,10 @@ struct otx2_tim_evdev { struct otx2_tim_ring { uintptr_t base; + struct rte_reciprocal_u64 fast_div; uint16_t nb_chunk_slots; uint32_t nb_bkts; + uint64_t ring_start_cyc; struct otx2_tim_bkt *bkt; struct rte_mempool *chunk_pool; uint64_t tck_int; @@ -142,6 +157,24 @@ tim_priv_get(void) return mz->addr; } +#define TIM_ARM_FASTPATH_MODES \ +FP(mod_sp, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(mod_mp, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(mod_fb_sp, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(mod_fb_mp, 0, 1, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ +FP(and_sp, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(and_mp, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(and_fb_sp, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(and_fb_mp, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | 
OTX2_TIM_MP) \ + +#define FP(_name, _f3, _f2, _f1, flags) \ +uint16_t otx2_tim_arm_burst_ ## _name( \ + const struct rte_event_timer_adapter *adptr, \ + struct rte_event_timer **tim, \ + const uint16_t nb_timers); +TIM_ARM_FASTPATH_MODES +#undef FP + int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags, uint32_t *caps, const struct rte_event_timer_adapter_ops **ops); diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c index 29ed1fd5a..409575ec4 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.c +++ b/drivers/event/octeontx2/otx2_tim_worker.c @@ -5,3 +5,80 @@ #include "otx2_tim_evdev.h" #include "otx2_tim_worker.h" +static inline int +tim_arm_checks(const struct otx2_tim_ring * const tim_ring, + struct rte_event_timer * const tim) +{ + if (unlikely(tim->state)) { + tim->state = RTE_EVENT_TIMER_ERROR; + rte_errno = EALREADY; + goto fail; + } + + if (unlikely(!tim->timeout_ticks || + tim->timeout_ticks >= tim_ring->nb_bkts)) { + tim->state = tim->timeout_ticks ? 
RTE_EVENT_TIMER_ERROR_TOOLATE + : RTE_EVENT_TIMER_ERROR_TOOEARLY; + rte_errno = EINVAL; + goto fail; + } + + return 0; + +fail: + return -EINVAL; +} + +static inline void +tim_format_event(const struct rte_event_timer * const tim, + struct otx2_tim_ent * const entry) +{ + entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 | + (tim->ev.event & 0xFFFFFFFFF); + entry->wqe = tim->ev.u64; +} + +static __rte_always_inline uint16_t +tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr, + struct rte_event_timer **tim, + const uint16_t nb_timers, + const uint8_t flags) +{ + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + struct otx2_tim_ent entry; + uint16_t index; + int ret; + + for (index = 0; index < nb_timers; index++) { + if (tim_arm_checks(tim_ring, tim[index])) + break; + + tim_format_event(tim[index], &entry); + if (flags & OTX2_TIM_SP) + ret = tim_add_entry_sp(tim_ring, + tim[index]->timeout_ticks, + tim[index], &entry, flags); + if (flags & OTX2_TIM_MP) + ret = tim_add_entry_mp(tim_ring, + tim[index]->timeout_ticks, + tim[index], &entry, flags); + + if (unlikely(ret)) { + rte_errno = -ret; + break; + } + } + + return index; +} + +#define FP(_name, _f3, _f2, _f1, _flags) \ +uint16_t __rte_noinline \ +otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ + struct rte_event_timer **tim, \ + const uint16_t nb_timers) \ +{ \ + return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \ +} +TIM_ARM_FASTPATH_MODES +#undef FP diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h index ccb137d13..a5e0d56bc 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.h +++ b/drivers/event/octeontx2/otx2_tim_worker.h @@ -108,4 +108,208 @@ tim_bkt_clr_nent(struct otx2_tim_bkt *bktp) return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL); } +static __rte_always_inline struct otx2_tim_bkt * +tim_get_target_bucket(struct otx2_tim_ring * const tim_ring, + const uint32_t rel_bkt, const uint8_t 
flag) +{ + const uint64_t bkt_cyc = rte_rdtsc() - tim_ring->ring_start_cyc; + uint32_t bucket = rte_reciprocal_divide_u64(bkt_cyc, + &tim_ring->fast_div) + rel_bkt; + + if (flag & OTX2_TIM_BKT_MOD) + bucket = bucket % tim_ring->nb_bkts; + if (flag & OTX2_TIM_BKT_AND) + bucket = bucket & (tim_ring->nb_bkts - 1); + + return &tim_ring->bkt[bucket]; +} + +static struct otx2_tim_ent * +tim_clr_bkt(struct otx2_tim_ring * const tim_ring, + struct otx2_tim_bkt * const bkt) +{ + struct otx2_tim_ent *chunk; + struct otx2_tim_ent *pnext; + + chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk); + chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk + + tim_ring->nb_chunk_slots)->w0; + while (chunk) { + pnext = (struct otx2_tim_ent *)(uintptr_t) + ((chunk + tim_ring->nb_chunk_slots)->w0); + rte_mempool_put(tim_ring->chunk_pool, chunk); + chunk = pnext; + } + + return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk; +} + +static struct otx2_tim_ent * +tim_refill_chunk(struct otx2_tim_bkt * const bkt, + struct otx2_tim_ring * const tim_ring) +{ + struct otx2_tim_ent *chunk; + + if (bkt->nb_entry || !bkt->first_chunk) { + if (unlikely(rte_mempool_get(tim_ring->chunk_pool, + (void **)&chunk))) + return NULL; + if (bkt->nb_entry) { + *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t) + bkt->current_chunk) + + tim_ring->nb_chunk_slots) = + (uintptr_t)chunk; + } else { + bkt->first_chunk = (uintptr_t)chunk; + } + } else { + chunk = tim_clr_bkt(tim_ring, bkt); + bkt->first_chunk = (uintptr_t)chunk; + } + *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; + + return chunk; +} + +static struct otx2_tim_ent * +tim_insert_chunk(struct otx2_tim_bkt * const bkt, + struct otx2_tim_ring * const tim_ring) +{ + struct otx2_tim_ent *chunk; + + if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk))) + return NULL; + + *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; + if (bkt->nb_entry) { + *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t) + bkt->current_chunk) + + 
tim_ring->nb_chunk_slots) = (uintptr_t)chunk; + } else { + bkt->first_chunk = (uintptr_t)chunk; + } + + return chunk; +} + +static __rte_always_inline int +tim_add_entry_sp(struct otx2_tim_ring * const tim_ring, + const uint32_t rel_bkt, + struct rte_event_timer * const tim, + const struct otx2_tim_ent * const pent, + const uint8_t flags) +{ + struct otx2_tim_ent *chunk; + struct otx2_tim_bkt *bkt; + uint64_t lock_sema; + int16_t rem; + + bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags); + +__retry: + /* Get Bucket sema*/ + lock_sema = tim_bkt_fetch_sema(bkt); + + /* Bucket related checks. */ + if (unlikely(tim_bkt_get_hbt(lock_sema))) + goto __retry; + + /* Insert the work. */ + rem = tim_bkt_fetch_rem(lock_sema); + + if (!rem) { + if (flags & OTX2_TIM_ENA_FB) + chunk = tim_refill_chunk(bkt, tim_ring); + if (flags & OTX2_TIM_ENA_DFB) + chunk = tim_insert_chunk(bkt, tim_ring); + + if (unlikely(chunk == NULL)) { + tim_bkt_set_rem(bkt, 0); + tim->impl_opaque[0] = 0; + tim->impl_opaque[1] = 0; + tim->state = RTE_EVENT_TIMER_ERROR; + return -ENOMEM; + } + bkt->current_chunk = (uintptr_t)chunk; + tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - 1); + } else { + chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk; + chunk += tim_ring->nb_chunk_slots - rem; + } + + /* Copy work entry. */ + *chunk = *pent; + + tim_bkt_inc_nent(bkt); + + tim->impl_opaque[0] = (uintptr_t)chunk; + tim->impl_opaque[1] = (uintptr_t)bkt; + tim->state = RTE_EVENT_TIMER_ARMED; + + return 0; +} + +static __rte_always_inline int +tim_add_entry_mp(struct otx2_tim_ring * const tim_ring, + const uint32_t rel_bkt, + struct rte_event_timer * const tim, + const struct otx2_tim_ent * const pent, + const uint8_t flags) +{ + struct otx2_tim_ent *chunk; + struct otx2_tim_bkt *bkt; + uint64_t lock_sema; + int16_t rem; + +__retry: + bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags); + + /* Get Bucket sema*/ + lock_sema = tim_bkt_fetch_sema_lock(bkt); + + /* Bucket related checks. 
*/ + if (unlikely(tim_bkt_get_hbt(lock_sema))) { + tim_bkt_dec_lock(bkt); + goto __retry; + } + + rem = tim_bkt_fetch_rem(lock_sema); + + if (rem < 0) { + /* Goto diff bucket. */ + tim_bkt_dec_lock(bkt); + goto __retry; + } else if (!rem) { + /* Only one thread can be here*/ + if (flags & OTX2_TIM_ENA_FB) + chunk = tim_refill_chunk(bkt, tim_ring); + if (flags & OTX2_TIM_ENA_DFB) + chunk = tim_insert_chunk(bkt, tim_ring); + + if (unlikely(chunk == NULL)) { + tim_bkt_set_rem(bkt, 0); + tim_bkt_dec_lock(bkt); + tim->impl_opaque[0] = 0; + tim->impl_opaque[1] = 0; + tim->state = RTE_EVENT_TIMER_ERROR; + return -ENOMEM; + } + bkt->current_chunk = (uintptr_t)chunk; + tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - 1); + } else { + chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk; + chunk += tim_ring->nb_chunk_slots - rem; + } + + /* Copy work entry. */ + *chunk = *pent; + tim_bkt_dec_lock(bkt); + tim_bkt_inc_nent(bkt); + tim->impl_opaque[0] = (uintptr_t)chunk; + tim->impl_opaque[1] = (uintptr_t)bkt; + tim->state = RTE_EVENT_TIMER_ARMED; + + return 0; +} + #endif /* __OTX2_TIM_WORKER_H__ */ From patchwork Fri Jun 28 18:23:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55642 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 16C541BB83; Fri, 28 Jun 2019 20:25:48 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3BD841B9F3 for ; Fri, 28 Jun 2019 20:25:28 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x5SIPEFo014540 for ; Fri, 28 Jun 2019 11:25:27 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; 
From: Pavan Nikhilesh Bhagavatula
Date: Fri, 28 Jun 2019 23:53:48 +0530
Message-ID: <20190628182354.228-38-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 37/42] event/octeontx2: add event timer arm timeout burst
Errors-To:
dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event timer arm timeout burst function. All the timers requested to be armed have the same timeout. Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_tim_evdev.c | 9 +++ drivers/event/octeontx2/otx2_tim_evdev.h | 16 ++++ drivers/event/octeontx2/otx2_tim_worker.c | 53 ++++++++++++ drivers/event/octeontx2/otx2_tim_worker.h | 98 +++++++++++++++++++++++ 4 files changed, 176 insertions(+) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index f4651c281..fabcd3d0a 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -42,8 +42,17 @@ TIM_ARM_FASTPATH_MODES #undef FP }; + const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = { +#define FP(_name, _f2, _f1, flags) \ + [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name, +TIM_ARM_TMO_FASTPATH_MODES +#undef FP + }; + otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized] [tim_ring->ena_dfb][prod_flag]; + otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized] + [tim_ring->ena_dfb]; } static void diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 01b271507..751659719 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -66,6 +66,8 @@ #define OTX2_TIM_MAX_BUCKETS (0xFFFFF) #define OTX2_TIM_RING_DEF_CHUNK_SZ (4096) #define OTX2_TIM_CHUNK_ALIGNMENT (16) +#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \ + OTX2_TIM_CHUNK_ALIGNMENT) #define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1) #define OTX2_TIM_MIN_CHUNK_SLOTS (0x1) #define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE) @@ -175,6 +177,20 @@ uint16_t otx2_tim_arm_burst_ ## _name( \ TIM_ARM_FASTPATH_MODES #undef FP +#define TIM_ARM_TMO_FASTPATH_MODES \ +FP(mod, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \ +FP(mod_fb, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \ +FP(and, 1, 
0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \ +FP(and_fb, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \ + +#define FP(_name, _f2, _f1, flags) \ +uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \ + const struct rte_event_timer_adapter *adptr, \ + struct rte_event_timer **tim, \ + const uint64_t timeout_tick, const uint16_t nb_timers); +TIM_ARM_TMO_FASTPATH_MODES +#undef FP + int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags, uint32_t *caps, const struct rte_event_timer_adapter_ops **ops); diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c index 409575ec4..737b167d1 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.c +++ b/drivers/event/octeontx2/otx2_tim_worker.c @@ -72,6 +72,45 @@ tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr, return index; } +static __rte_always_inline uint16_t +tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr, + struct rte_event_timer **tim, + const uint64_t timeout_tick, + const uint16_t nb_timers, const uint8_t flags) +{ + struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned; + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + uint16_t set_timers = 0; + uint16_t arr_idx = 0; + uint16_t idx; + int ret; + + if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) { + const enum rte_event_timer_state state = timeout_tick ? 
+ RTE_EVENT_TIMER_ERROR_TOOLATE : + RTE_EVENT_TIMER_ERROR_TOOEARLY; + for (idx = 0; idx < nb_timers; idx++) + tim[idx]->state = state; + + rte_errno = EINVAL; + return 0; + } + + while (arr_idx < nb_timers) { + for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers); + idx++, arr_idx++) { + tim_format_event(tim[arr_idx], &entry[idx]); + } + ret = tim_add_entry_brst(tim_ring, timeout_tick, + &tim[set_timers], entry, idx, flags); + set_timers += ret; + if (ret != idx) + break; + } + + return set_timers; +} + #define FP(_name, _f3, _f2, _f1, _flags) \ uint16_t __rte_noinline \ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ @@ -82,3 +121,17 @@ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ } TIM_ARM_FASTPATH_MODES #undef FP + +#define FP(_name, _f2, _f1, _flags) \ +uint16_t __rte_noinline \ +otx2_tim_arm_tmo_tick_burst_ ## _name( \ + const struct rte_event_timer_adapter *adptr, \ + struct rte_event_timer **tim, \ + const uint64_t timeout_tick, \ + const uint16_t nb_timers) \ +{ \ + return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \ + nb_timers, _flags); \ +} +TIM_ARM_TMO_FASTPATH_MODES +#undef FP diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h index a5e0d56bc..da8c93ff2 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.h +++ b/drivers/event/octeontx2/otx2_tim_worker.h @@ -312,4 +312,102 @@ tim_add_entry_mp(struct otx2_tim_ring * const tim_ring, return 0; } +static inline uint16_t +tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt, + struct otx2_tim_ent *chunk, + struct rte_event_timer ** const tim, + const struct otx2_tim_ent * const ents, + const struct otx2_tim_bkt * const bkt) +{ + for (; index < cpy_lmt; index++) { + *chunk = *(ents + index); + tim[index]->impl_opaque[0] = (uintptr_t)chunk++; + tim[index]->impl_opaque[1] = (uintptr_t)bkt; + tim[index]->state = RTE_EVENT_TIMER_ARMED; + } + + return index; +} + +/* Burst mode functions */ +static 
inline int +tim_add_entry_brst(struct otx2_tim_ring * const tim_ring, + const uint16_t rel_bkt, + struct rte_event_timer ** const tim, + const struct otx2_tim_ent *ents, + const uint16_t nb_timers, const uint8_t flags) +{ + struct otx2_tim_ent *chunk; + struct otx2_tim_bkt *bkt; + uint16_t chunk_remainder; + uint16_t index = 0; + uint64_t lock_sema; + int16_t rem, crem; + uint8_t lock_cnt; + +__retry: + bkt = tim_get_target_bucket(tim_ring, rel_bkt, flags); + + /* Only one thread beyond this. */ + lock_sema = tim_bkt_inc_lock(bkt); + lock_cnt = (uint8_t) + ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK); + + if (lock_cnt) { + tim_bkt_dec_lock(bkt); + goto __retry; + } + + /* Bucket related checks. */ + if (unlikely(tim_bkt_get_hbt(lock_sema))) { + tim_bkt_dec_lock(bkt); + goto __retry; + } + + chunk_remainder = tim_bkt_fetch_rem(lock_sema); + rem = chunk_remainder - nb_timers; + if (rem < 0) { + crem = tim_ring->nb_chunk_slots - chunk_remainder; + if (chunk_remainder && crem) { + chunk = ((struct otx2_tim_ent *) + (uintptr_t)bkt->current_chunk) + crem; + + index = tim_cpy_wrk(index, chunk_remainder, chunk, tim, + ents, bkt); + tim_bkt_sub_rem(bkt, chunk_remainder); + tim_bkt_add_nent(bkt, chunk_remainder); + } + + if (flags & OTX2_TIM_ENA_FB) + chunk = tim_refill_chunk(bkt, tim_ring); + if (flags & OTX2_TIM_ENA_DFB) + chunk = tim_insert_chunk(bkt, tim_ring); + + if (unlikely(chunk == NULL)) { + tim_bkt_dec_lock(bkt); + rte_errno = ENOMEM; + tim[index]->state = RTE_EVENT_TIMER_ERROR; + return crem; + } + *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0; + bkt->current_chunk = (uintptr_t)chunk; + tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt); + + rem = nb_timers - chunk_remainder; + tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem); + tim_bkt_add_nent(bkt, rem); + } else { + chunk = (struct otx2_tim_ent *)(uintptr_t)bkt->current_chunk; + chunk += (tim_ring->nb_chunk_slots - chunk_remainder); + + tim_cpy_wrk(index, nb_timers, chunk, tim, 
ents, bkt); + tim_bkt_sub_rem(bkt, nb_timers); + tim_bkt_add_nent(bkt, nb_timers); + } + + tim_bkt_dec_lock(bkt); + + return nb_timers; +} + #endif /* __OTX2_TIM_WORKER_H__ */ From patchwork Fri Jun 28 18:23:49 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55643
Date: Fri, 28 Jun 2019 23:53:49 +0530
Message-ID: <20190628182354.228-39-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 38/42] event/octeontx2: add event timer cancel function
From: Pavan Nikhilesh Add function to cancel event timer that has been armed.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_tim_evdev.c | 1 + drivers/event/octeontx2/otx2_tim_evdev.h | 4 +++ drivers/event/octeontx2/otx2_tim_worker.c | 29 ++++++++++++++++++ drivers/event/octeontx2/otx2_tim_worker.h | 37 +++++++++++++++++++++++ 4 files changed, 71 insertions(+) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index fabcd3d0a..d95be66c6 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -53,6 +53,7 @@ TIM_ARM_TMO_FASTPATH_MODES [tim_ring->ena_dfb][prod_flag]; otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized] [tim_ring->ena_dfb]; + otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst; } static void diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 751659719..7bdd5c8db 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -191,6 +191,10 @@ uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \ TIM_ARM_TMO_FASTPATH_MODES #undef FP +uint16_t otx2_tim_timer_cancel_burst( + const struct rte_event_timer_adapter *adptr, + struct rte_event_timer **tim, const uint16_t nb_timers); + int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags, uint32_t *caps, const struct rte_event_timer_adapter_ops **ops); diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c index 737b167d1..fd1f02630 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.c +++ b/drivers/event/octeontx2/otx2_tim_worker.c @@ -135,3 +135,32 @@ otx2_tim_arm_tmo_tick_burst_ ## _name( \ } TIM_ARM_TMO_FASTPATH_MODES #undef FP + +uint16_t +otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr, + struct rte_event_timer **tim, + const uint16_t nb_timers) +{ + uint16_t index; + int ret; + + RTE_SET_USED(adptr); + for (index = 0; index < nb_timers; index++) { + if (tim[index]->state == 
RTE_EVENT_TIMER_CANCELED) { + rte_errno = EALREADY; + break; + } + + if (tim[index]->state != RTE_EVENT_TIMER_ARMED) { + rte_errno = EINVAL; + break; + } + ret = tim_rm_entry(tim[index]); + if (ret) { + rte_errno = -ret; + break; + } + } + + return index; +} diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h index da8c93ff2..b193e2cab 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.h +++ b/drivers/event/octeontx2/otx2_tim_worker.h @@ -410,4 +410,41 @@ tim_add_entry_brst(struct otx2_tim_ring * const tim_ring, return nb_timers; } +static int +tim_rm_entry(struct rte_event_timer *tim) +{ + struct otx2_tim_ent *entry; + struct otx2_tim_bkt *bkt; + uint64_t lock_sema; + + if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0) + return -ENOENT; + + entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0]; + if (entry->wqe != tim->ev.u64) { + tim->impl_opaque[0] = 0; + tim->impl_opaque[1] = 0; + return -ENOENT; + } + + bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1]; + lock_sema = tim_bkt_inc_lock(bkt); + if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) { + tim_bkt_dec_lock(bkt); + tim->impl_opaque[0] = 0; + tim->impl_opaque[1] = 0; + return -ENOENT; + } + + entry->w0 = 0; + entry->wqe = 0; + tim_bkt_dec_lock(bkt); + + tim->state = RTE_EVENT_TIMER_CANCELED; + tim->impl_opaque[0] = 0; + tim->impl_opaque[1] = 0; + + return 0; +} + #endif /* __OTX2_TIM_WORKER_H__ */ From patchwork Fri Jun 28 18:23:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 55644 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 879681BBD2; Fri, 28 Jun 2019 20:25:51 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com 
From: Pavan Nikhilesh Bhagavatula
To: John McNamara, Marko Kovacevic
Date: Fri, 28 Jun 2019 23:53:50 +0530
Message-ID: <20190628182354.228-40-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
MIME-Version: 1.0
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-28_08:, , signatures=0 Subject: [dpdk-dev] [PATCH v3 39/42] event/octeontx2: add event timer stats get and reset X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Pavan Nikhilesh Add event timer adapter statistics get and reset functions. Stats are disabled by default and can be enabled through devargs. Example: --dev "0002:0e:00.0,tim_stats_ena=1" Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/octeontx2.rst | 8 +++ drivers/event/octeontx2/otx2_tim_evdev.c | 55 ++++++++++++++--- drivers/event/octeontx2/otx2_tim_evdev.h | 75 ++++++++++++++++------- drivers/event/octeontx2/otx2_tim_worker.c | 9 ++- 4 files changed, 112 insertions(+), 35 deletions(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index 1e79bd916..bbc66558f 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -114,6 +114,14 @@ Runtime Config Options --dev "0002:0e:00.0,tim_chnk_slots=1023" +- ``TIM enable arm/cancel statistics`` + + The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of + event timer adapter. 
+ For example:: + + --dev "0002:0e:00.0,tim_stats_ena=1" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index d95be66c6..af68254f5 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -35,24 +35,26 @@ tim_set_fp_ops(struct otx2_tim_ring *tim_ring) uint8_t prod_flag = !tim_ring->prod_type_sp; /* [MOD/AND] [DFB/FB] [SP][MP]*/ - const rte_event_timer_arm_burst_t arm_burst[2][2][2] = { -#define FP(_name, _f3, _f2, _f1, flags) \ - [_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name, + const rte_event_timer_arm_burst_t arm_burst[2][2][2][2] = { +#define FP(_name, _f4, _f3, _f2, _f1, flags) \ + [_f4][_f3][_f2][_f1] = otx2_tim_arm_burst_ ## _name, TIM_ARM_FASTPATH_MODES #undef FP }; - const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = { -#define FP(_name, _f2, _f1, flags) \ - [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name, + const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2][2] = { +#define FP(_name, _f3, _f2, _f1, flags) \ + [_f3][_f2][_f1] = otx2_tim_arm_tmo_tick_burst_ ## _name, TIM_ARM_TMO_FASTPATH_MODES #undef FP }; - otx2_tim_ops.arm_burst = arm_burst[tim_ring->optimized] - [tim_ring->ena_dfb][prod_flag]; - otx2_tim_ops.arm_tmo_tick_burst = arm_tmo_burst[tim_ring->optimized] - [tim_ring->ena_dfb]; + otx2_tim_ops.arm_burst = + arm_burst[tim_ring->enable_stats][tim_ring->optimized] + [tim_ring->ena_dfb][prod_flag]; + otx2_tim_ops.arm_tmo_tick_burst = + arm_tmo_burst[tim_ring->enable_stats][tim_ring->optimized] + [tim_ring->ena_dfb]; otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst; } @@ -300,6 +302,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->chunk_sz = dev->chunk_sz; nb_timers = rcfg->nb_timers; tim_ring->disable_npa = dev->disable_npa; + tim_ring->enable_stats = dev->enable_stats; tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS( tim_ring->chunk_sz); @@ -403,6 
+406,30 @@ otx2_tim_ring_free(struct rte_event_timer_adapter *adptr) return 0; } +static int +otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter, + struct rte_event_timer_adapter_stats *stats) +{ + struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv; + uint64_t bkt_cyc = rte_rdtsc() - tim_ring->ring_start_cyc; + + + stats->evtim_exp_count = rte_atomic64_read(&tim_ring->arm_cnt); + stats->ev_enq_count = stats->evtim_exp_count; + stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc, + &tim_ring->fast_div); + return 0; +} + +static int +otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter) +{ + struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv; + + rte_atomic64_clear(&tim_ring->arm_cnt); + return 0; +} + int otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, uint32_t *caps, @@ -418,6 +445,11 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, otx2_tim_ops.uninit = otx2_tim_ring_free; otx2_tim_ops.get_info = otx2_tim_ring_info_get; + if (dev->enable_stats) { + otx2_tim_ops.stats_get = otx2_tim_stats_get; + otx2_tim_ops.stats_reset = otx2_tim_stats_reset; + } + /* Store evdev pointer for later use. 
*/ dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev; *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT; @@ -428,6 +460,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, #define OTX2_TIM_DISABLE_NPA "tim_disable_npa" #define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots" +#define OTX2_TIM_STATS_ENA "tim_stats_ena" static void tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) @@ -445,6 +478,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) &parse_kvargs_flag, &dev->disable_npa); rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS, &parse_kvargs_value, &dev->chunk_slots); + rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag, + &dev->enable_stats); } void diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 7bdd5c8db..c8d16b03f 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -79,6 +79,7 @@ #define OTX2_TIM_BKT_MOD 0x8 #define OTX2_TIM_ENA_FB 0x10 #define OTX2_TIM_ENA_DFB 0x20 +#define OTX2_TIM_ENA_STATS 0x40 enum otx2_tim_clk_src { OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK, @@ -120,6 +121,7 @@ struct otx2_tim_evdev { /* Dev args */ uint8_t disable_npa; uint16_t chunk_slots; + uint8_t enable_stats; /* MSIX offsets */ uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS]; }; @@ -133,7 +135,9 @@ struct otx2_tim_ring { struct otx2_tim_bkt *bkt; struct rte_mempool *chunk_pool; uint64_t tck_int; + rte_atomic64_t arm_cnt; uint8_t prod_type_sp; + uint8_t enable_stats; uint8_t disable_npa; uint8_t optimized; uint8_t ena_dfb; @@ -159,32 +163,57 @@ tim_priv_get(void) return mz->addr; } -#define TIM_ARM_FASTPATH_MODES \ -FP(mod_sp, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ -FP(mod_mp, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ -FP(mod_fb_sp, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ -FP(mod_fb_mp, 0, 1, 1, OTX2_TIM_BKT_MOD | 
OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ -FP(and_sp, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ -FP(and_mp, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ -FP(and_fb_sp, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ -FP(and_fb_mp, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ - -#define FP(_name, _f3, _f2, _f1, flags) \ -uint16_t otx2_tim_arm_burst_ ## _name( \ - const struct rte_event_timer_adapter *adptr, \ - struct rte_event_timer **tim, \ - const uint16_t nb_timers); +#define TIM_ARM_FASTPATH_MODES \ +FP(mod_sp, 0, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(mod_mp, 0, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(mod_fb_sp, 0, 0, 1, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(mod_fb_mp, 0, 0, 1, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ +FP(and_sp, 0, 1, 0, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(and_mp, 0, 1, 0, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(and_fb_sp, 0, 1, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(and_fb_mp, 0, 1, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ +FP(stats_mod_sp, 1, 0, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(stats_mod_mp, 1, 0, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(stats_mod_fb_sp, 1, 0, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(stats_mod_fb_mp, 1, 0, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_FB | OTX2_TIM_MP) \ +FP(stats_and_sp, 1, 1, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \ +FP(stats_and_mp, 1, 1, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \ +FP(stats_and_fb_sp, 1, 1, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_FB | OTX2_TIM_SP) \ +FP(stats_and_fb_mp, 1, 1, 1, 1, 
OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_FB | OTX2_TIM_MP) + +#define TIM_ARM_TMO_FASTPATH_MODES \ +FP(mod, 0, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \ +FP(mod_fb, 0, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \ +FP(and, 0, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \ +FP(and_fb, 0, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \ +FP(stats_mod, 1, 0, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_DFB) \ +FP(stats_mod_fb, 1, 0, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_MOD | \ + OTX2_TIM_ENA_FB) \ +FP(stats_and, 1, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_DFB) \ +FP(stats_and_fb, 1, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_BKT_AND | \ + OTX2_TIM_ENA_FB) + +#define FP(_name, _f4, _f3, _f2, _f1, flags) \ +uint16_t \ +otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ + struct rte_event_timer **tim, \ + const uint16_t nb_timers); TIM_ARM_FASTPATH_MODES #undef FP -#define TIM_ARM_TMO_FASTPATH_MODES \ -FP(mod, 0, 0, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_DFB) \ -FP(mod_fb, 0, 1, OTX2_TIM_BKT_MOD | OTX2_TIM_ENA_FB) \ -FP(and, 1, 0, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_DFB) \ -FP(and_fb, 1, 1, OTX2_TIM_BKT_AND | OTX2_TIM_ENA_FB) \ - -#define FP(_name, _f2, _f1, flags) \ -uint16_t otx2_tim_arm_tmo_tick_burst_ ## _name( \ +#define FP(_name, _f3, _f2, _f1, flags) \ +uint16_t \ +otx2_tim_arm_tmo_tick_burst_ ## _name( \ const struct rte_event_timer_adapter *adptr, \ struct rte_event_timer **tim, \ const uint64_t timeout_tick, const uint16_t nb_timers); diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c index fd1f02630..feba61cd4 100644 --- a/drivers/event/octeontx2/otx2_tim_worker.c +++ b/drivers/event/octeontx2/otx2_tim_worker.c @@ -69,6 +69,9 @@ tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr, } } + if (flags & OTX2_TIM_ENA_STATS) + rte_atomic64_add(&tim_ring->arm_cnt, index); + return index; } @@ -107,11 +110,13 @@ tim_timer_arm_tmo_brst(const struct 
rte_event_timer_adapter *adptr, if (ret != idx) break; } + if (flags & OTX2_TIM_ENA_STATS) + rte_atomic64_add(&tim_ring->arm_cnt, set_timers); return set_timers; } -#define FP(_name, _f3, _f2, _f1, _flags) \ +#define FP(_name, _f4, _f3, _f2, _f1, _flags) \ uint16_t __rte_noinline \ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ struct rte_event_timer **tim, \ @@ -122,7 +127,7 @@ otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \ TIM_ARM_FASTPATH_MODES #undef FP -#define FP(_name, _f2, _f1, _flags) \ +#define FP(_name, _f3, _f2, _f1, _flags) \ uint16_t __rte_noinline \ otx2_tim_arm_tmo_tick_burst_ ## _name( \ const struct rte_event_timer_adapter *adptr, \
From patchwork Fri Jun 28 18:23:51 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55645
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:51 +0530
Message-ID: <20190628182354.228-41-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 40/42] event/octeontx2: add event timer adapter start and stop
From: Pavan Nikhilesh Add event timer adapter start and stop functions.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx2/otx2_tim_evdev.c | 66 ++++++++++++++++++++++++ 1 file changed, 66 insertions(+) diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index af68254f5..cd9a679fb 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -377,6 +377,69 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) return rc; } +static int +otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr) +{ + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + struct otx2_tim_evdev *dev = tim_priv_get(); + struct tim_enable_rsp *rsp; + struct tim_ring_req *req; + int rc; + + if (dev == NULL) + return -ENODEV; + + req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox); + req->ring = tim_ring->ring_id; + + rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp); + if (rc < 0) { + tim_err_desc(rc); + goto fail; + } +#ifdef RTE_ARM_EAL_RDTSC_USE_PMU + uint64_t tenns_stmp, tenns_diff; + uint64_t pmu_stmp; + + pmu_stmp = rte_rdtsc(); + asm volatile("mrs %0, cntvct_el0" : "=r" (tenns_stmp)); + + tenns_diff = tenns_stmp - rsp->timestarted; + pmu_stmp = pmu_stmp - (NSEC2TICK(tenns_diff * 10, rte_get_timer_hz())); + tim_ring->ring_start_cyc = pmu_stmp; +#else + tim_ring->ring_start_cyc = rsp->timestarted; +#endif + tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, rte_get_timer_hz()); + tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int); + +fail: + return rc; +} + +static int +otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr) +{ + struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv; + struct otx2_tim_evdev *dev = tim_priv_get(); + struct tim_ring_req *req; + int rc; + + if (dev == NULL) + return -ENODEV; + + req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox); + req->ring = tim_ring->ring_id; + + rc = otx2_mbox_process(dev->mbox); + if (rc < 0) { + tim_err_desc(rc); + rc = -EBUSY; + } + + return rc; +} + 
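In the start path above, when RTE_ARM_EAL_RDTSC_USE_PMU is defined, the mailbox response's ring start timestamp (read from the generic timer in 10 ns units) is translated into PMU cycles so later tick arithmetic can work directly off rte_rdtsc(). A minimal stand-alone sketch of that translation follows; nsec2tick() and ring_start_cyc_from_pmu() are hypothetical helpers modeled on the driver's NSEC2TICK() macro, not the driver's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the driver's NSEC2TICK() macro: convert a
 * nanosecond interval into counter cycles at the given frequency. The
 * 128-bit intermediate (GCC/Clang extension) avoids overflow of
 * ns * freq for large intervals.
 */
static inline uint64_t
nsec2tick(uint64_t ns, uint64_t freq_hz)
{
	return (uint64_t)(((unsigned __int128)ns * freq_hz) / 1000000000ULL);
}

/* Mirrors the #ifdef RTE_ARM_EAL_RDTSC_USE_PMU branch of
 * otx2_tim_ring_start(): the hardware reports when the ring started in
 * 10 ns generic-timer units, so the start point in PMU units is "PMU
 * now" minus the elapsed time converted into PMU cycles.
 */
static inline uint64_t
ring_start_cyc_from_pmu(uint64_t pmu_now, uint64_t tenns_now,
			uint64_t tenns_started, uint64_t pmu_hz)
{
	uint64_t tenns_diff = tenns_now - tenns_started;

	return pmu_now - nsec2tick(tenns_diff * 10, pmu_hz);
}
```

For example, with a 1 GHz PMU a ring that started 100 generic-timer ticks (1000 ns) ago maps to pmu_now minus 1000 cycles.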
static int otx2_tim_ring_free(struct rte_event_timer_adapter *adptr) { @@ -438,11 +501,14 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, struct otx2_tim_evdev *dev = tim_priv_get(); RTE_SET_USED(flags); + if (dev == NULL) return -ENODEV; otx2_tim_ops.init = otx2_tim_ring_create; otx2_tim_ops.uninit = otx2_tim_ring_free; + otx2_tim_ops.start = otx2_tim_ring_start; + otx2_tim_ops.stop = otx2_tim_ring_stop; otx2_tim_ops.get_info = otx2_tim_ring_info_get; if (dev->enable_stats) {
From patchwork Fri Jun 28 18:23:52 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55646
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:52 +0530
Message-ID: <20190628182354.228-42-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 41/42] event/octeontx2: add devargs to limit timer adapters
From: Pavan Nikhilesh Add devargs to limit the maximum number of TIM rings reserved on probe. Since TIM rings are HW resources, not grabbing all of them avoids starving other applications.
Example: --dev "0002:0e:00.0,tim_rings_lmt=2" Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/octeontx2.rst | 10 ++++++++++ drivers/event/octeontx2/otx2_tim_evdev.c | 6 +++++- drivers/event/octeontx2/otx2_tim_evdev.h | 1 + 3 files changed, 16 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index bbc66558f..baa866a1e 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -122,6 +122,16 @@ Runtime Config Options --dev "0002:0e:00.0,tim_stats_ena=1" +- ``TIM limit max rings reserved`` + + The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM + rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW + resources we can avoid starving other applications by not grabbing all the + rings. + For example:: + + --dev "0002:0e:00.0,tim_rings_lmt=5" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index cd9a679fb..c312bd541 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -527,6 +527,7 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, #define OTX2_TIM_DISABLE_NPA "tim_disable_npa" #define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots" #define OTX2_TIM_STATS_ENA "tim_stats_ena" +#define OTX2_TIM_RINGS_LMT "tim_rings_lmt" static void tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) @@ -546,6 +547,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) &parse_kvargs_value, &dev->chunk_slots); rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag, &dev->enable_stats); + rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value, + &dev->min_ring_cnt); } void @@ -583,7 +586,8 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev) goto mz_free; } - dev->nb_rings = rsrc_cnt->tim; + dev->nb_rings = 
dev->min_ring_cnt ? + RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim; if (!dev->nb_rings) { otx2_tim_dbg("No TIM Logical functions provisioned."); diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index c8d16b03f..5af724ef9 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -121,6 +121,7 @@ struct otx2_tim_evdev { /* Dev args */ uint8_t disable_npa; uint16_t chunk_slots; + uint16_t min_ring_cnt; uint8_t enable_stats; /* MSIX offsets */ uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
From patchwork Fri Jun 28 18:23:53 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55647
X-Patchwork-Delegate: jerinj@marvell.com
Date: Fri, 28 Jun 2019 23:53:53 +0530
Message-ID: <20190628182354.228-43-pbhagavatula@marvell.com>
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 42/42] event/octeontx2: add devargs to control adapter parameters
From: Pavan Nikhilesh Add devargs to control the internal parameters of each event timer adapter (TIM ring) individually. The expected dict format is [ring-chnk_slots-disable_npa-stats_ena]; 0 represents the default value.
Example: --dev "0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]" Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/octeontx2.rst | 10 +++ drivers/event/octeontx2/otx2_tim_evdev.c | 87 +++++++++++++++++++++++- drivers/event/octeontx2/otx2_tim_evdev.h | 10 +++ 3 files changed, 106 insertions(+), 1 deletion(-) diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst index baa866a1e..e5624ba23 100644 --- a/doc/guides/eventdevs/octeontx2.rst +++ b/doc/guides/eventdevs/octeontx2.rst @@ -132,6 +132,16 @@ Runtime Config Options --dev "0002:0e:00.0,tim_rings_lmt=5" +- ``TIM ring control internal parameters`` + + When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to + control each TIM rings internal parameters uniquely. The following dict + format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents + default values. + For Example:: + + --dev "0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]" + Debugging Options ~~~~~~~~~~~~~~~~~ diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c index c312bd541..446807606 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.c +++ b/drivers/event/octeontx2/otx2_tim_evdev.c @@ -255,7 +255,7 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) struct tim_lf_alloc_req *req; struct tim_lf_alloc_rsp *rsp; uint64_t nb_timers; - int rc; + int i, rc; if (dev == NULL) return -ENODEV; @@ -304,6 +304,18 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr) tim_ring->disable_npa = dev->disable_npa; tim_ring->enable_stats = dev->enable_stats; + for (i = 0; i < dev->ring_ctl_cnt ; i++) { + struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i]; + + if (ring_ctl->ring == tim_ring->ring_id) { + tim_ring->chunk_sz = ring_ctl->chunk_slots ? 
+ ((uint32_t)(ring_ctl->chunk_slots + 1) * + OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz; + tim_ring->enable_stats = ring_ctl->enable_stats; + tim_ring->disable_npa = ring_ctl->disable_npa; + } + } + tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS( tim_ring->chunk_sz); tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz); @@ -528,6 +540,77 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, #define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots" #define OTX2_TIM_STATS_ENA "tim_stats_ena" #define OTX2_TIM_RINGS_LMT "tim_rings_lmt" +#define OTX2_TIM_RING_CTL "tim_ring_ctl" + +static void +tim_parse_ring_param(char *value, void *opaque) +{ + struct otx2_tim_evdev *dev = opaque; + struct otx2_tim_ctl ring_ctl = {0}; + char *tok = strtok(value, "-"); + uint16_t *val; + + val = (uint16_t *)&ring_ctl; + + if (!strlen(value)) + return; + + while (tok != NULL) { + *val = atoi(tok); + tok = strtok(NULL, "-"); + val++; + } + + if (val != (&ring_ctl.enable_stats + 1)) { + otx2_err( + "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]"); + return; + } + + dev->ring_ctl_cnt++; + dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data, + sizeof(struct otx2_tim_ctl), 0); + dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl; +} + +static void +tim_parse_ring_ctl_list(const char *value, void *opaque) +{ + char *s = strdup(value); + char *start = NULL; + char *end = NULL; + char *f = s; + + while (*s) { + if (*s == '[') + start = s; + else if (*s == ']') + end = s; + + if (start < end && *start) { + *end = 0; + tim_parse_ring_param(start + 1, opaque); + start = end; + s = end; + } + s++; + } + + free(f); +} + +static int +tim_parse_kvargs_dict(const char *key, const char *value, void *opaque) +{ + RTE_SET_USED(key); + + /* Dict format [ring-chunk_sz-disable_npa-enable_stats] use '-' as ',' + * isn't allowed. 0 represents default. 
+ */ + tim_parse_ring_ctl_list(value, opaque); + + return 0; +} static void tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) @@ -549,6 +632,8 @@ tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev) &dev->enable_stats); rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value, &dev->min_ring_cnt); + rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL, + &tim_parse_kvargs_dict, &dev); } void diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h index 5af724ef9..eec0189c1 100644 --- a/drivers/event/octeontx2/otx2_tim_evdev.h +++ b/drivers/event/octeontx2/otx2_tim_evdev.h @@ -111,6 +111,13 @@ struct otx2_tim_ent { uint64_t wqe; } __rte_packed; +struct otx2_tim_ctl { + uint16_t ring; + uint16_t chunk_slots; + uint16_t disable_npa; + uint16_t enable_stats; +}; + struct otx2_tim_evdev { struct rte_pci_device *pci_dev; struct rte_eventdev *event_dev; @@ -123,6 +130,9 @@ struct otx2_tim_evdev { uint16_t chunk_slots; uint16_t min_ring_cnt; uint8_t enable_stats; + uint16_t ring_ctl_cnt; + struct otx2_tim_ctl *ring_ctl_data; + /* HW const */ /* MSIX offsets */ uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS]; };
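The tim_ring_ctl parser above walks each bracketed group with strtok(), using '-' as the field separator because ',' is already taken by the devargs syntax. Below is a minimal, reentrant sketch of parsing one [ring-chnk_slots-disable_npa-stats_ena] group; struct tim_ctl and tim_ctl_parse() are hypothetical names standing in for the driver's otx2_tim_ctl handling, not the driver's code. Note, too, that the rte_realloc() call in tim_parse_ring_param sizes the array as sizeof(struct otx2_tim_ctl) without scaling by ring_ctl_cnt, so storing more than one entry would need the count factored into the allocation.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model of the per-ring devarg record: four dash-separated
 * uint16_t fields, mirroring struct otx2_tim_ctl.
 */
struct tim_ctl {
	uint16_t ring;
	uint16_t chunk_slots;
	uint16_t disable_npa;
	uint16_t enable_stats;
};

/* Parse one "ring-chnk_slots-disable_npa-stats_ena" token. Returns 0 on
 * success, -1 if the field count is not exactly four. sscanf() replaces
 * the patch's strtok() walk so the sketch stays reentrant; the trailing
 * %c deliberately fails to match when the input has no extra fields.
 */
static int
tim_ctl_parse(const char *value, struct tim_ctl *ctl)
{
	unsigned int f[4];
	char extra;

	if (sscanf(value, "%u-%u-%u-%u%c", &f[0], &f[1], &f[2], &f[3],
		   &extra) != 4)
		return -1;

	ctl->ring = (uint16_t)f[0];
	ctl->chunk_slots = (uint16_t)f[1];
	ctl->disable_npa = (uint16_t)f[2];
	ctl->enable_stats = (uint16_t)f[3];

	return 0;
}
```

For the documented example "tim_ring_ctl=[2-1023-1-0]", the group body "2-1023-1-0" parses to ring 2, 1023 chunk slots, NPA disabled, stats disabled.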