From patchwork Fri Apr 19 06:43:13 2024
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 139534
X-Patchwork-Delegate: thomas@monjalon.net
From: Anoob Joseph
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob, Thomas Monjalon
CC: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri
Subject: [PATCH v3 1/7] dma/odm: add framework for ODM DMA device
Date: Fri, 19 Apr 2024 12:13:13 +0530
Message-ID: <20240419064319.149-2-anoobj@marvell.com>
In-Reply-To: <20240419064319.149-1-anoobj@marvell.com>
References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com>

Add framework for Odyssey ODM DMA device.
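
For context, a minimal sketch (illustrative only, not part of this patch) of how
an application would locate the device once this PMD has probed it, assuming the
standard dmadev API; the PCI address used as the device name is a placeholder:

#include <stdio.h>
#include <rte_dmadev.h>

/* Illustrative only: the PCI BDF below is a placeholder. */
static int
find_odm_dmadev(void)
{
	struct rte_dma_info info;
	int dev_id;

	/* The probe path names the dmadev after the PCI address of the VF. */
	dev_id = rte_dma_get_dev_id_by_name("0000:08:00.1");
	if (dev_id < 0)
		return dev_id;

	if (rte_dma_info_get(dev_id, &info) != 0)
		return -1;

	printf("dmadev %d: %s, max_vchans=%u\n", dev_id, info.dev_name, info.max_vchans);

	return dev_id;
}
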
Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- MAINTAINERS | 6 +++ drivers/dma/meson.build | 1 + drivers/dma/odm/meson.build | 14 +++++++ drivers/dma/odm/odm.h | 29 ++++++++++++++ drivers/dma/odm/odm_dmadev.c | 74 ++++++++++++++++++++++++++++++++++++ 5 files changed, 124 insertions(+) create mode 100644 drivers/dma/odm/meson.build create mode 100644 drivers/dma/odm/odm.h create mode 100644 drivers/dma/odm/odm_dmadev.c diff --git a/MAINTAINERS b/MAINTAINERS index 7abb3aee49..b8d2f7b3d8 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1268,6 +1268,12 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl F: drivers/dma/cnxk/ F: doc/guides/dmadevs/cnxk.rst +Marvell Odyssey ODM DMA +M: Gowrishankar Muthukrishnan +M: Vidya Sagar Velumuri +T: git://dpdk.org/next/dpdk-next-net-mrvl +F: drivers/dma/odm/ + NXP DPAA DMA M: Gagandeep Singh M: Sachin Saxena diff --git a/drivers/dma/meson.build b/drivers/dma/meson.build index 582654ea1b..358132759a 100644 --- a/drivers/dma/meson.build +++ b/drivers/dma/meson.build @@ -8,6 +8,7 @@ drivers = [ 'hisilicon', 'idxd', 'ioat', + 'odm', 'skeleton', ] std_deps = ['dmadev'] diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build new file mode 100644 index 0000000000..227b10c890 --- /dev/null +++ b/drivers/dma/odm/meson.build @@ -0,0 +1,14 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2024 Marvell. + +if not is_linux or not dpdk_conf.get('RTE_ARCH_64') + build = false + reason = 'only supported on 64-bit Linux' + subdir_done() +endif + +deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci'] + +sources = files('odm_dmadev.c') + +pmd_supports_disable_iova_as_pa = true diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h new file mode 100644 index 0000000000..aeeb6f9e9a --- /dev/null +++ b/drivers/dma/odm/odm.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. + */ + +#ifndef _ODM_H_ +#define _ODM_H_ + +#include + +extern int odm_logtype; + +#define odm_err(...) \ + rte_log(RTE_LOG_ERR, odm_logtype, \ + RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ + RTE_FMT_TAIL(__VA_ARGS__, ))) +#define odm_info(...) \ + rte_log(RTE_LOG_INFO, odm_logtype, \ + RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ + RTE_FMT_TAIL(__VA_ARGS__, ))) + +struct __rte_cache_aligned odm_dev { + struct rte_pci_device *pci_dev; + uint8_t *rbase; + uint16_t vfid; + uint8_t max_qs; + uint8_t num_qs; +}; + +#endif /* _ODM_H_ */ diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c new file mode 100644 index 0000000000..cc3342cf7b --- /dev/null +++ b/drivers/dma/odm/odm_dmadev.c @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. 
+ */ + +#include + +#include +#include +#include +#include +#include +#include + +#include "odm.h" + +#define PCI_VENDOR_ID_CAVIUM 0x177D +#define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C +#define PCI_DRIVER_NAME dma_odm + +static int +odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) +{ + char name[RTE_DEV_NAME_MAX_LEN]; + struct odm_dev *odm = NULL; + struct rte_dma_dev *dmadev; + + if (!pci_dev->mem_resource[0].addr) + return -ENODEV; + + memset(name, 0, sizeof(name)); + rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); + + dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*odm)); + if (dmadev == NULL) { + odm_err("DMA device allocation failed for %s", name); + return -ENOMEM; + } + + odm_info("DMA device %s probed", name); + + return 0; +} + +static int +odm_dmadev_remove(struct rte_pci_device *pci_dev) +{ + char name[RTE_DEV_NAME_MAX_LEN]; + + memset(name, 0, sizeof(name)); + rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); + + return rte_dma_pmd_release(name); +} + +static const struct rte_pci_id odm_dma_pci_map[] = { + { + RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_ODYSSEY_ODM_VF) + }, + { + .vendor_id = 0, + }, +}; + +static struct rte_pci_driver odm_dmadev = { + .id_table = odm_dma_pci_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + .probe = odm_dmadev_probe, + .remove = odm_dmadev_remove, +}; + +RTE_PMD_REGISTER_PCI(PCI_DRIVER_NAME, odm_dmadev); +RTE_PMD_REGISTER_PCI_TABLE(PCI_DRIVER_NAME, odm_dma_pci_map); +RTE_PMD_REGISTER_KMOD_DEP(PCI_DRIVER_NAME, "vfio-pci"); +RTE_LOG_REGISTER_DEFAULT(odm_logtype, NOTICE); From patchwork Fri Apr 19 06:43:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 139535 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 81B3943EAC; Fri, 19 Apr 2024 08:43:38 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 20333402F2; Fri, 19 Apr 2024 08:43:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 33481402F2 for ; Fri, 19 Apr 2024 08:43:31 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43ILWVxZ008209; Thu, 18 Apr 2024 23:43:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=PRstuv4EVxhyeCwDL3JKaUbzJyl6lJrzq7BGhHnYDVw=; b=VNb Xxhv26YHTNNk485jD2xeGcr+MVuCGIOP1M0XK944iR6HuiKz6aWppVcGUxAkqQW4 epusOhweIopw+Lt7YMrdW0bWfBbKms0XIuTInkZnGTvGNPRSaYoqDXitllvsQVOy WWVbc1rRGHTnrgzDpZqm0JA8zxwUOdPxppXix9xrL/3ZdF69Ga7+2zq/azXnrysa 9MkWuZHujIc59fRdPIlUgz2DKelGJI2K7Mymj63dKmhnn8hOTBaQzEQywO1p+4bO p+azn8gS02o/5DNwT7pJ1dMexgQcr7WACG538owcyr/m+BW8JQd62rHbfn3wKe3C Vq/Ly9fDuCToSDFNL6w== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3xjhecq8t5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 18 Apr 2024 23:43:30 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft 
SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 18 Apr 2024 23:43:29 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Thu, 18 Apr 2024 23:43:29 -0700 Received: from BG-LT92004.corp.innovium.com (BG-LT92004.marvell.com [10.28.163.189]) by maili.marvell.com (Postfix) with ESMTP id DF83B5B6933; Thu, 18 Apr 2024 23:43:26 -0700 (PDT) From: Anoob Joseph To: Chengwen Feng , Kevin Laatz , Bruce Richardson , "Jerin Jacob" , Thomas Monjalon CC: Gowrishankar Muthukrishnan , "Vidya Sagar Velumuri" , Subject: [PATCH v3 2/7] dma/odm: add hardware defines Date: Fri, 19 Apr 2024 12:13:14 +0530 Message-ID: <20240419064319.149-3-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240419064319.149-1-anoobj@marvell.com> References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: b58zVDvJh9QCd-m0jnYVgV4LCfyjutMf X-Proofpoint-GUID: b58zVDvJh9QCd-m0jnYVgV4LCfyjutMf X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1011,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-04-19_04,2024-04-17_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add ODM registers and structures. Add mailbox structs as well. Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- drivers/dma/odm/odm.h | 116 +++++++++++++++++++++++++++++++++++++ drivers/dma/odm/odm_priv.h | 49 ++++++++++++++++ 2 files changed, 165 insertions(+) create mode 100644 drivers/dma/odm/odm_priv.h diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h index aeeb6f9e9a..7564ffbed4 100644 --- a/drivers/dma/odm/odm.h +++ b/drivers/dma/odm/odm.h @@ -9,6 +9,47 @@ extern int odm_logtype; +/* ODM VF register offsets from VF_BAR0 */ +#define ODM_VDMA_EN(x) (0x00 | (x << 3)) +#define ODM_VDMA_REQQ_CTL(x) (0x80 | (x << 3)) +#define ODM_VDMA_DBELL(x) (0x100 | (x << 3)) +#define ODM_VDMA_RING_CFG(x) (0x180 | (x << 3)) +#define ODM_VDMA_IRING_BADDR(x) (0x200 | (x << 3)) +#define ODM_VDMA_CRING_BADDR(x) (0x280 | (x << 3)) +#define ODM_VDMA_COUNTS(x) (0x300 | (x << 3)) +#define ODM_VDMA_IRING_NADDR(x) (0x380 | (x << 3)) +#define ODM_VDMA_CRING_NADDR(x) (0x400 | (x << 3)) +#define ODM_VDMA_IRING_DBG(x) (0x480 | (x << 3)) +#define ODM_VDMA_CNT(x) (0x580 | (x << 3)) +#define ODM_VF_INT (0x1000) +#define ODM_VF_INT_W1S (0x1008) +#define ODM_VF_INT_ENA_W1C (0x1010) +#define ODM_VF_INT_ENA_W1S (0x1018) +#define ODM_MBOX_VF_PF_DATA(i) (0x2000 | (i << 3)) + +#define ODM_MBOX_RETRY_CNT (0xfffffff) +#define ODM_MBOX_ERR_CODE_MAX (0xf) +#define ODM_IRING_IDLE_WAIT_CNT (0xfffffff) + +/** + * Enumeration odm_hdr_xtype_e + * + * ODM Transfer Type Enumeration + * Enumerates the pointer type in ODM_DMA_INSTR_HDR_S[XTYPE] + */ +#define ODM_XTYPE_INTERNAL 2 +#define ODM_XTYPE_FILL0 4 +#define ODM_XTYPE_FILL1 5 + +/** + * ODM Header completion type enumeration + * Enumerates the completion type in ODM_DMA_INSTR_HDR_S[CT] + */ +#define ODM_HDR_CT_CW_CA 0x0 +#define ODM_HDR_CT_CW_NC 0x1 + +#define ODM_MAX_QUEUES_PER_DEV 16 + #define odm_err(...) 
\ rte_log(RTE_LOG_ERR, odm_logtype, \ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ @@ -18,6 +59,81 @@ extern int odm_logtype; RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ RTE_FMT_TAIL(__VA_ARGS__, ))) +/** + * Structure odm_instr_hdr_s for ODM + * + * ODM DMA Instruction Header Format + */ +union odm_instr_hdr_s { + uint64_t u; + struct odm_instr_hdr { + uint64_t nfst : 3; + uint64_t reserved_3 : 1; + uint64_t nlst : 3; + uint64_t reserved_7_9 : 3; + uint64_t ct : 2; + uint64_t stse : 1; + uint64_t reserved_13_28 : 16; + uint64_t sts : 1; + uint64_t reserved_30_49 : 20; + uint64_t xtype : 3; + uint64_t reserved_53_63 : 11; + } s; +}; + +/** + * ODM Completion Entry Structure + * + */ +union odm_cmpl_ent_s { + uint32_t u; + struct odm_cmpl_ent { + uint32_t cmp_code : 8; + uint32_t rsvd : 23; + uint32_t valid : 1; + } s; +}; + +/** + * ODM DMA Ring Configuration Register + */ +union odm_vdma_ring_cfg_s { + uint64_t u; + struct { + uint64_t isize : 8; + uint64_t rsvd_8_15 : 8; + uint64_t csize : 8; + uint64_t rsvd_24_63 : 40; + } s; +}; + +/** + * ODM DMA Instruction Ring DBG + */ +union odm_vdma_iring_dbg_s { + uint64_t u; + struct { + uint64_t dbell_cnt : 32; + uint64_t offset : 16; + uint64_t rsvd_48_62 : 15; + uint64_t iwbusy : 1; + } s; +}; + +/** + * ODM DMA Counts + */ +union odm_vdma_counts_s { + uint64_t u; + struct { + uint64_t dbell : 32; + uint64_t buf_used_cnt : 9; + uint64_t rsvd_41_43 : 3; + uint64_t rsvd_buf_used_cnt : 3; + uint64_t rsvd_47_63 : 17; + } s; +}; + struct __rte_cache_aligned odm_dev { struct rte_pci_device *pci_dev; uint8_t *rbase; diff --git a/drivers/dma/odm/odm_priv.h b/drivers/dma/odm/odm_priv.h new file mode 100644 index 0000000000..1878f4d9a6 --- /dev/null +++ b/drivers/dma/odm/odm_priv.h @@ -0,0 +1,49 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. 
+ */ + +#ifndef _ODM_PRIV_H_ +#define _ODM_PRIV_H_ + +#define ODM_MAX_VFS 16 +#define ODM_MAX_QUEUES 32 + +#define ODM_CMD_QUEUE_SIZE 4096 + +#define ODM_DEV_INIT 0x1 +#define ODM_DEV_CLOSE 0x2 +#define ODM_QUEUE_OPEN 0x3 +#define ODM_QUEUE_CLOSE 0x4 +#define ODM_REG_DUMP 0x5 + +struct odm_mbox_dev_msg { + /* Response code */ + uint64_t rsp : 8; + /* Number of VFs */ + uint64_t nvfs : 2; + /* Error code */ + uint64_t err : 6; + /* Reserved */ + uint64_t rsvd_16_63 : 48; +}; + +struct odm_mbox_queue_msg { + /* Command code */ + uint64_t cmd : 8; + /* VF ID to configure */ + uint64_t vfid : 8; + /* Queue index in the VF */ + uint64_t qidx : 8; + /* Reserved */ + uint64_t rsvd_24_63 : 40; +}; + +union odm_mbox_msg { + uint64_t u[2]; + struct { + struct odm_mbox_dev_msg d; + struct odm_mbox_queue_msg q; + }; +}; + +#endif /* _ODM_PRIV_H_ */ From patchwork Fri Apr 19 06:43:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 139536 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 232F143EAC; Fri, 19 Apr 2024 08:43:45 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 919334067B; Fri, 19 Apr 2024 08:43:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id BD48E4067B for ; Fri, 19 Apr 2024 08:43:34 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43ILYlnP010535; Thu, 18 Apr 2024 23:43:34 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=T7W4R3WQAcmYT0v5Sgs8JWFuRm5unoc8OuWUtlLLAh8=; b=bi3 IF+ewYH4KMjzF+G+NzEnUIUkIBH1cg9h25n8QHDIp4TaLy379er1Cmkmp9b6eQlA RXH5aRQzgWl6tWjlc77MF+9Xmrs+LgbqRopMp4ETHVtfBvEV2AFCBLAe16MBu03X ZPUV3vSVe+Kfl5rTDz7ESoAhrcysUzbTH0wU5rGyV0sfL6H73JmR5tG3n3APciG/ sFy5cecDFkHhtmfLTDwMLdTvbxen6NCo/VLlFuFzk7rRElG/GkqyZg4vpqWeXAGj ClCUH/835vX4G8KCZjjmR0AUDBFKUdIxMlLn5azbpvnybmaZmNWM8H3Vo7VWRz/o d1YlEuRInOEMYYXLKHA== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3xjhecq8th-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 18 Apr 2024 23:43:33 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 18 Apr 2024 23:43:32 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Thu, 18 Apr 2024 23:43:32 -0700 Received: from BG-LT92004.corp.innovium.com (BG-LT92004.marvell.com [10.28.163.189]) by maili.marvell.com (Postfix) with ESMTP id F3F625B6933; Thu, 18 Apr 2024 23:43:29 -0700 (PDT) From: Anoob Joseph To: Chengwen Feng , Kevin Laatz , Bruce Richardson , "Jerin Jacob" , Thomas Monjalon CC: Gowrishankar Muthukrishnan , "Vidya Sagar Velumuri" , Subject: [PATCH v3 3/7] dma/odm: add dev init and fini Date: Fri, 19 Apr 2024 12:13:15 +0530 Message-ID: 
<20240419064319.149-4-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240419064319.149-1-anoobj@marvell.com> References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: K29XywqRvd-EgDHnlTkktGUIk6_PdVp9 X-Proofpoint-GUID: K29XywqRvd-EgDHnlTkktGUIk6_PdVp9 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1011,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-04-19_04,2024-04-17_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Gowrishankar Muthukrishnan Add ODM device init and fini. Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- drivers/dma/odm/meson.build | 2 +- drivers/dma/odm/odm.c | 97 ++++++++++++++++++++++++++++++++++++ drivers/dma/odm/odm.h | 10 ++++ drivers/dma/odm/odm_dmadev.c | 13 +++++ 4 files changed, 121 insertions(+), 1 deletion(-) create mode 100644 drivers/dma/odm/odm.c diff --git a/drivers/dma/odm/meson.build b/drivers/dma/odm/meson.build index 227b10c890..d597762d37 100644 --- a/drivers/dma/odm/meson.build +++ b/drivers/dma/odm/meson.build @@ -9,6 +9,6 @@ endif deps += ['bus_pci', 'dmadev', 'eal', 'mempool', 'pci'] -sources = files('odm_dmadev.c') +sources = files('odm_dmadev.c', 'odm.c') pmd_supports_disable_iova_as_pa = true diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c new file mode 100644 index 0000000000..c0963da451 --- /dev/null +++ b/drivers/dma/odm/odm.c @@ -0,0 +1,97 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2024 Marvell. + */ + +#include + +#include + +#include + +#include "odm.h" +#include "odm_priv.h" + +static void +odm_vchan_resc_free(struct odm_dev *odm, int qno) +{ + RTE_SET_USED(odm); + RTE_SET_USED(qno); +} + +static int +send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg *rsp) +{ + int retry_cnt = ODM_MBOX_RETRY_CNT; + union odm_mbox_msg pf_msg; + + msg->d.err = ODM_MBOX_ERR_CODE_MAX; + odm_write64(msg->u[0], odm->rbase + ODM_MBOX_VF_PF_DATA(0)); + odm_write64(msg->u[1], odm->rbase + ODM_MBOX_VF_PF_DATA(1)); + + pf_msg.u[0] = 0; + pf_msg.u[1] = 0; + pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0)); + + while (pf_msg.d.rsp == 0 && retry_cnt > 0) { + pf_msg.u[0] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(0)); + --retry_cnt; + } + + if (retry_cnt <= 0) + return -EBADE; + + pf_msg.u[1] = odm_read64(odm->rbase + ODM_MBOX_VF_PF_DATA(1)); + + if (rsp) { + rsp->u[0] = pf_msg.u[0]; + rsp->u[1] = pf_msg.u[1]; + } + + if (pf_msg.d.rsp == msg->d.err && pf_msg.d.err != 0) + return -EBADE; + + return 0; +} + +int +odm_dev_init(struct odm_dev *odm) +{ + struct rte_pci_device *pci_dev = odm->pci_dev; + union odm_mbox_msg mbox_msg; + uint16_t vfid; + int rc; + + odm->rbase = pci_dev->mem_resource[0].addr; + vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7); + vfid -= 1; + odm->vfid = vfid; + odm->num_qs = 0; + + mbox_msg.u[0] = 0; + mbox_msg.u[1] = 0; + mbox_msg.q.vfid = odm->vfid; + mbox_msg.q.cmd = ODM_DEV_INIT; + rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg); + if (!rc) + odm->max_qs = 1 << (4 - mbox_msg.d.nvfs); + + return rc; +} + +int +odm_dev_fini(struct odm_dev *odm) +{ + union odm_mbox_msg mbox_msg; + int qno, rc = 0; + + mbox_msg.u[0] = 0; + mbox_msg.u[1] = 0; + mbox_msg.q.vfid = 
odm->vfid; + mbox_msg.q.cmd = ODM_DEV_CLOSE; + rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg); + + for (qno = 0; qno < odm->num_qs; qno++) + odm_vchan_resc_free(odm, qno); + + return rc; +} diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h index 7564ffbed4..9fd3e30ad8 100644 --- a/drivers/dma/odm/odm.h +++ b/drivers/dma/odm/odm.h @@ -5,6 +5,10 @@ #ifndef _ODM_H_ #define _ODM_H_ +#include + +#include +#include #include extern int odm_logtype; @@ -50,6 +54,9 @@ extern int odm_logtype; #define ODM_MAX_QUEUES_PER_DEV 16 +#define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr)) +#define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr)) + #define odm_err(...) \ rte_log(RTE_LOG_ERR, odm_logtype, \ RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ @@ -142,4 +149,7 @@ struct __rte_cache_aligned odm_dev { uint8_t num_qs; }; +int odm_dev_init(struct odm_dev *odm); +int odm_dev_fini(struct odm_dev *odm); + #endif /* _ODM_H_ */ diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c index cc3342cf7b..bef335c10c 100644 --- a/drivers/dma/odm/odm_dmadev.c +++ b/drivers/dma/odm/odm_dmadev.c @@ -23,6 +23,7 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev char name[RTE_DEV_NAME_MAX_LEN]; struct odm_dev *odm = NULL; struct rte_dma_dev *dmadev; + int rc; if (!pci_dev->mem_resource[0].addr) return -ENODEV; @@ -37,8 +38,20 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev } odm_info("DMA device %s probed", name); + odm = dmadev->data->dev_private; + + odm->pci_dev = pci_dev; + + rc = odm_dev_init(odm); + if (rc < 0) + goto dma_pmd_release; return 0; + +dma_pmd_release: + rte_dma_pmd_release(name); + + return rc; } static int From patchwork Fri Apr 19 06:43:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 139537 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9D22543EAC; Fri, 19 Apr 2024 08:43:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E64DF40684; Fri, 19 Apr 2024 08:43:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 4D89740684 for ; Fri, 19 Apr 2024 08:43:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43ILWVxe008209; Thu, 18 Apr 2024 23:43:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=7RnWzbs+Laq/Ex0hOXJgPWVdWT95NEevwe+xgdgPVxM=; b=ciD j3sGdFrLkNe0HccopMAVwYlUOzol05R7re/PGZ0NftRTUhhA4sXz/bPbmOo5cvcy 4Gt1BpSZ8HIm3VPWDEfrqjqRs6JKl/1Dy3bHFypBh41onMdpmPO3Kvwm4vUJU2Pi 619eJpia2DbRHCghDZyovKe2bHZgpeXcdXf3X/kHhFLa0A7C83zWPpE64UNpvzj6 2W8c2+ICef9pwErOFl2RxtWbxGYo18mYnD5o1bSdnatZGWP3yeq/ei2UiDSjJ09I rfV3t3mc+qAmS7hHOkM9TJHE/qiN6vjaTvBYN422OcSiivcuOLCzFKE8K6NGS+nM 07BhwOmNsWn9LjwGvNw== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3xjhecq8u0-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 
verify=NOT); Thu, 18 Apr 2024 23:43:36 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 18 Apr 2024 23:43:35 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Thu, 18 Apr 2024 23:43:35 -0700 Received: from BG-LT92004.corp.innovium.com (BG-LT92004.marvell.com [10.28.163.189]) by maili.marvell.com (Postfix) with ESMTP id 144A15B6933; Thu, 18 Apr 2024 23:43:32 -0700 (PDT) From: Anoob Joseph To: Chengwen Feng , Kevin Laatz , Bruce Richardson , "Jerin Jacob" , Thomas Monjalon CC: Gowrishankar Muthukrishnan , "Vidya Sagar Velumuri" , Subject: [PATCH v3 4/7] dma/odm: add device ops Date: Fri, 19 Apr 2024 12:13:16 +0530 Message-ID: <20240419064319.149-5-anoobj@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20240419064319.149-1-anoobj@marvell.com> References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: xQOV8aq78FYSHApbThyVd2XLczqBbyeB X-Proofpoint-GUID: xQOV8aq78FYSHApbThyVd2XLczqBbyeB X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1011,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-04-19_04,2024-04-17_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Gowrishankar Muthukrishnan Add DMA device control ops. Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- drivers/dma/odm/odm.c | 144 ++++++++++++++++++++++++++++++++++- drivers/dma/odm/odm.h | 58 ++++++++++++++ drivers/dma/odm/odm_dmadev.c | 85 +++++++++++++++++++++ 3 files changed, 285 insertions(+), 2 deletions(-) diff --git a/drivers/dma/odm/odm.c b/drivers/dma/odm/odm.c index c0963da451..6094ace9fd 100644 --- a/drivers/dma/odm/odm.c +++ b/drivers/dma/odm/odm.c @@ -7,6 +7,7 @@ #include #include +#include #include "odm.h" #include "odm_priv.h" @@ -14,8 +15,15 @@ static void odm_vchan_resc_free(struct odm_dev *odm, int qno) { - RTE_SET_USED(odm); - RTE_SET_USED(qno); + struct odm_queue *vq = &odm->vq[qno]; + + rte_memzone_free(vq->iring_mz); + rte_memzone_free(vq->cring_mz); + rte_free(vq->extra_ins_sz); + + vq->iring_mz = NULL; + vq->cring_mz = NULL; + vq->extra_ins_sz = NULL; } static int @@ -53,6 +61,138 @@ send_mbox_to_pf(struct odm_dev *odm, union odm_mbox_msg *msg, union odm_mbox_msg return 0; } +static int +odm_queue_ring_config(struct odm_dev *odm, int vchan, int isize, int csize) +{ + union odm_vdma_ring_cfg_s ring_cfg = {0}; + struct odm_queue *vq = &odm->vq[vchan]; + + if (vq->iring_mz == NULL || vq->cring_mz == NULL) + return -EINVAL; + + ring_cfg.s.isize = (isize / 1024) - 1; + ring_cfg.s.csize = (csize / 1024) - 1; + + odm_write64(ring_cfg.u, odm->rbase + ODM_VDMA_RING_CFG(vchan)); + odm_write64(vq->iring_mz->iova, odm->rbase + ODM_VDMA_IRING_BADDR(vchan)); + odm_write64(vq->cring_mz->iova, odm->rbase + ODM_VDMA_CRING_BADDR(vchan)); + + return 0; +} + +int +odm_enable(struct odm_dev *odm) +{ + struct odm_queue *vq; + int qno, rc = 0; + + for (qno = 0; qno < odm->num_qs; qno++) { + vq = &odm->vq[qno]; + + vq->desc_idx = vq->stats.completed_offset; + vq->pending_submit_len = 0; + 
vq->pending_submit_cnt = 0; + vq->iring_head = 0; + vq->cring_head = 0; + vq->ins_ring_head = 0; + vq->iring_sz_available = vq->iring_max_words; + + rc = odm_queue_ring_config(odm, qno, vq->iring_max_words * 8, + vq->cring_max_entry * 4); + if (rc < 0) + break; + + odm_write64(0x1, odm->rbase + ODM_VDMA_EN(qno)); + } + + return rc; +} + +int +odm_disable(struct odm_dev *odm) +{ + int qno, wait_cnt = ODM_IRING_IDLE_WAIT_CNT; + uint64_t val; + + /* Disable the queue and wait for the queue to became idle */ + for (qno = 0; qno < odm->num_qs; qno++) { + odm_write64(0x0, odm->rbase + ODM_VDMA_EN(qno)); + do { + val = odm_read64(odm->rbase + ODM_VDMA_IRING_BADDR(qno)); + } while ((!(val & 1ULL << 63)) && (--wait_cnt > 0)); + } + + return 0; +} + +int +odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc) +{ + struct odm_queue *vq = &odm->vq[vchan]; + int isize, csize, max_nb_desc, rc = 0; + union odm_mbox_msg mbox_msg; + const struct rte_memzone *mz; + char name[32]; + + if (vq->iring_mz != NULL) + odm_vchan_resc_free(odm, vchan); + + mbox_msg.u[0] = 0; + mbox_msg.u[1] = 0; + + /* ODM PF driver expects vfid starts from index 0 */ + mbox_msg.q.vfid = odm->vfid; + mbox_msg.q.cmd = ODM_QUEUE_OPEN; + mbox_msg.q.qidx = vchan; + rc = send_mbox_to_pf(odm, &mbox_msg, &mbox_msg); + if (rc < 0) + return rc; + + /* Determine instruction & completion ring sizes. */ + + /* Create iring that can support nb_desc. Round up to a multiple of 1024. */ + isize = RTE_ALIGN_CEIL(nb_desc * ODM_IRING_ENTRY_SIZE_MAX * 8, 1024); + isize = RTE_MIN(isize, ODM_IRING_MAX_SIZE); + snprintf(name, sizeof(name), "vq%d_iring%d", odm->vfid, vchan); + mz = rte_memzone_reserve_aligned(name, isize, 0, ODM_MEMZONE_FLAGS, 1024); + if (mz == NULL) + return -ENOMEM; + vq->iring_mz = mz; + vq->iring_max_words = isize / 8; + + /* Create cring that can support max instructions that can be inflight in hw. */ + max_nb_desc = (isize / (ODM_IRING_ENTRY_SIZE_MIN * 8)); + csize = RTE_ALIGN_CEIL(max_nb_desc * sizeof(union odm_cmpl_ent_s), 1024); + snprintf(name, sizeof(name), "vq%d_cring%d", odm->vfid, vchan); + mz = rte_memzone_reserve_aligned(name, csize, 0, ODM_MEMZONE_FLAGS, 1024); + if (mz == NULL) { + rc = -ENOMEM; + goto iring_free; + } + vq->cring_mz = mz; + vq->cring_max_entry = csize / 4; + + /* Allocate memory to track the size of each instruction. 
*/ + snprintf(name, sizeof(name), "vq%d_extra%d", odm->vfid, vchan); + vq->extra_ins_sz = rte_zmalloc(name, vq->cring_max_entry, 0); + if (vq->extra_ins_sz == NULL) { + rc = -ENOMEM; + goto cring_free; + } + + vq->stats = (struct vq_stats){0}; + return rc; + +cring_free: + rte_memzone_free(odm->vq[vchan].cring_mz); + vq->cring_mz = NULL; +iring_free: + rte_memzone_free(odm->vq[vchan].iring_mz); + vq->iring_mz = NULL; + + return rc; +} + int odm_dev_init(struct odm_dev *odm) { diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h index 9fd3e30ad8..e1373e0c7f 100644 --- a/drivers/dma/odm/odm.h +++ b/drivers/dma/odm/odm.h @@ -9,7 +9,9 @@ #include #include +#include #include +#include extern int odm_logtype; @@ -54,6 +56,14 @@ extern int odm_logtype; #define ODM_MAX_QUEUES_PER_DEV 16 +#define ODM_IRING_MAX_SIZE (256 * 1024) +#define ODM_IRING_ENTRY_SIZE_MIN 4 +#define ODM_IRING_ENTRY_SIZE_MAX 13 +#define ODM_IRING_MAX_WORDS (ODM_IRING_MAX_SIZE / 8) +#define ODM_IRING_MAX_ENTRY (ODM_IRING_MAX_WORDS / ODM_IRING_ENTRY_SIZE_MIN) + +#define ODM_MAX_POINTER 4 + #define odm_read64(addr) rte_read64_relaxed((volatile void *)(addr)) #define odm_write64(val, addr) rte_write64_relaxed((val), (volatile void *)(addr)) @@ -66,6 +76,10 @@ extern int odm_logtype; RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__, \ RTE_FMT_TAIL(__VA_ARGS__, ))) +#define ODM_MEMZONE_FLAGS \ + (RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB | \ + RTE_MEMZONE_512MB | RTE_MEMZONE_4GB | RTE_MEMZONE_SIZE_HINT_ONLY) + /** * Structure odm_instr_hdr_s for ODM * @@ -141,8 +155,48 @@ union odm_vdma_counts_s { } s; }; +struct vq_stats { + uint64_t submitted; + uint64_t completed; + uint64_t errors; + /* + * Since stats.completed is used to return completion index, account for any packets + * received before stats is reset. + */ + uint64_t completed_offset; +}; + +struct odm_queue { + struct odm_dev *dev; + /* Instructions that are prepared on the iring, but is not pushed to hw yet. */ + uint16_t pending_submit_cnt; + /* Length (in words) of instructions that are not yet pushed to hw. */ + uint16_t pending_submit_len; + uint16_t desc_idx; + /* Instruction ring head. Used for enqueue. */ + uint16_t iring_head; + /* Completion ring head. Used for dequeue. */ + uint16_t cring_head; + /* Extra instruction size ring head. Used in enqueue-dequeue.*/ + uint16_t ins_ring_head; + /* Extra instruction size ring tail. 
Used in enqueue-dequeue.*/ + uint16_t ins_ring_tail; + /* Instruction size available.*/ + uint16_t iring_sz_available; + /* Number of 8-byte words in iring.*/ + uint16_t iring_max_words; + /* Number of words in cring.*/ + uint16_t cring_max_entry; + /* Extra instruction size used per inflight instruction.*/ + uint8_t *extra_ins_sz; + struct vq_stats stats; + const struct rte_memzone *iring_mz; + const struct rte_memzone *cring_mz; +}; + struct __rte_cache_aligned odm_dev { struct rte_pci_device *pci_dev; + struct odm_queue vq[ODM_MAX_QUEUES_PER_DEV]; uint8_t *rbase; uint16_t vfid; uint8_t max_qs; @@ -151,5 +205,9 @@ struct __rte_cache_aligned odm_dev { int odm_dev_init(struct odm_dev *odm); int odm_dev_fini(struct odm_dev *odm); +int odm_configure(struct odm_dev *odm); +int odm_enable(struct odm_dev *odm); +int odm_disable(struct odm_dev *odm); +int odm_vchan_setup(struct odm_dev *odm, int vchan, int nb_desc); #endif /* _ODM_H_ */ diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c index bef335c10c..8c705978fe 100644 --- a/drivers/dma/odm/odm_dmadev.c +++ b/drivers/dma/odm/odm_dmadev.c @@ -17,6 +17,87 @@ #define PCI_DEVID_ODYSSEY_ODM_VF 0xA08C #define PCI_DRIVER_NAME dma_odm +static int +odm_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size) +{ + struct odm_dev *odm = NULL; + + RTE_SET_USED(size); + + odm = dev->fp_obj->dev_private; + + dev_info->max_vchans = odm->max_qs; + dev_info->nb_vchans = odm->num_qs; + dev_info->dev_capa = + (RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG); + dev_info->max_desc = ODM_IRING_MAX_ENTRY; + dev_info->min_desc = 1; + dev_info->max_sges = ODM_MAX_POINTER; + + return 0; +} + +static int +odm_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz) +{ + struct odm_dev *odm = NULL; + + RTE_SET_USED(conf_sz); + + odm = dev->fp_obj->dev_private; + odm->num_qs = conf->nb_vchans; + + return 0; +} + +static int +odm_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, + const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + + RTE_SET_USED(conf_sz); + return odm_vchan_setup(odm, vchan, conf->nb_desc); +} + +static int +odm_dmadev_start(struct rte_dma_dev *dev) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + + return odm_enable(odm); +} + +static int +odm_dmadev_stop(struct rte_dma_dev *dev) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + + return odm_disable(odm); +} + +static int +odm_dmadev_close(struct rte_dma_dev *dev) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + + odm_disable(odm); + odm_dev_fini(odm); + + return 0; +} + +static const struct rte_dma_dev_ops odm_dmadev_ops = { + .dev_close = odm_dmadev_close, + .dev_configure = odm_dmadev_configure, + .dev_info_get = odm_dmadev_info_get, + .dev_start = odm_dmadev_start, + .dev_stop = odm_dmadev_stop, + .stats_get = NULL, + .stats_reset = NULL, + .vchan_setup = odm_dmadev_vchan_setup, +}; + static int odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { @@ -40,6 +121,10 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev odm_info("DMA device %s probed", name); odm = dmadev->data->dev_private; + dmadev->device = &pci_dev->device; + dmadev->fp_obj->dev_private = odm; + dmadev->dev_ops = &odm_dmadev_ops; + odm->pci_dev = pci_dev; rc = odm_dev_init(odm); From patchwork Fri Apr 19 06:43:17 2024 Content-Type: text/plain; 
X-Patchwork-Submitter: Anoob Joseph
X-Patchwork-Id: 139538
X-Patchwork-Delegate: thomas@monjalon.net
From: Anoob Joseph
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob, Thomas Monjalon
CC: Gowrishankar Muthukrishnan, Vidya Sagar Velumuri
Subject: [PATCH v3 5/7] dma/odm: add stats
Date: Fri, 19 Apr 2024 12:13:17 +0530
Message-ID: <20240419064319.149-6-anoobj@marvell.com>
In-Reply-To: <20240419064319.149-1-anoobj@marvell.com>
References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com>

From: Gowrishankar Muthukrishnan

Add DMA dev stats.
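
For context, a minimal sketch (illustrative only, not part of this patch) of how
these counters are read back through the dmadev stats API; dev_id and vchan are
placeholders, and RTE_DMA_ALL_VCHAN aggregates over all configured queues as
implemented here:

#include <inttypes.h>
#include <stdio.h>
#include <rte_dmadev.h>

/* Illustrative only: dev_id and vchan are placeholders. */
static void
dump_odm_stats(int16_t dev_id, uint16_t vchan)
{
	struct rte_dma_stats stats;

	/* Per-vchan counters, filled from the driver's per-queue vq_stats. */
	if (rte_dma_stats_get(dev_id, vchan, &stats) == 0)
		printf("vchan %u: submitted=%" PRIu64 " completed=%" PRIu64 " errors=%" PRIu64 "\n",
		       vchan, stats.submitted, stats.completed, stats.errors);

	/* Aggregate over all vchans, then clear the counters. */
	if (rte_dma_stats_get(dev_id, RTE_DMA_ALL_VCHAN, &stats) == 0)
		printf("total submitted=%" PRIu64 "\n", stats.submitted);

	rte_dma_stats_reset(dev_id, RTE_DMA_ALL_VCHAN);
}
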
Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- drivers/dma/odm/odm_dmadev.c | 63 ++++++++++++++++++++++++++++++++++-- 1 file changed, 61 insertions(+), 2 deletions(-) diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c index 8c705978fe..13b2588246 100644 --- a/drivers/dma/odm/odm_dmadev.c +++ b/drivers/dma/odm/odm_dmadev.c @@ -87,14 +87,73 @@ odm_dmadev_close(struct rte_dma_dev *dev) return 0; } +static int +odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats, + uint32_t size) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + + if (size < sizeof(rte_stats)) + return -EINVAL; + if (rte_stats == NULL) + return -EINVAL; + + if (vchan != RTE_DMA_ALL_VCHAN) { + struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[vchan].stats; + + *rte_stats = *stats; + } else { + int i; + + for (i = 0; i < odm->num_qs; i++) { + struct rte_dma_stats *stats = (struct rte_dma_stats *)&odm->vq[i].stats; + + rte_stats->submitted += stats->submitted; + rte_stats->completed += stats->completed; + rte_stats->errors += stats->errors; + } + } + + return 0; +} + +static void +odm_vq_stats_reset(struct vq_stats *vq_stats) +{ + vq_stats->completed_offset += vq_stats->completed; + vq_stats->completed = 0; + vq_stats->errors = 0; + vq_stats->submitted = 0; +} + +static int +odm_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) +{ + struct odm_dev *odm = dev->fp_obj->dev_private; + struct vq_stats *vq_stats; + int i; + + if (vchan != RTE_DMA_ALL_VCHAN) { + vq_stats = &odm->vq[vchan].stats; + odm_vq_stats_reset(vq_stats); + } else { + for (i = 0; i < odm->num_qs; i++) { + vq_stats = &odm->vq[i].stats; + odm_vq_stats_reset(vq_stats); + } + } + + return 0; +} + static const struct rte_dma_dev_ops odm_dmadev_ops = { .dev_close = odm_dmadev_close, .dev_configure = odm_dmadev_configure, .dev_info_get = odm_dmadev_info_get, .dev_start = odm_dmadev_start, .dev_stop = odm_dmadev_stop, - .stats_get = NULL, - .stats_reset = NULL, + .stats_get = odm_stats_get, + .stats_reset = odm_stats_reset, .vchan_setup = odm_dmadev_vchan_setup, }; From patchwork Fri Apr 19 06:43:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 139539 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9022643EAC; Fri, 19 Apr 2024 08:44:10 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5D45040698; Fri, 19 Apr 2024 08:43:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 1DD5940697 for ; Fri, 19 Apr 2024 08:43:44 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43ILYmCp010590; Thu, 18 Apr 2024 23:43:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=/JYbzfDCKEv1ytqV/l4sKG+IeuvClHlA6BIkmawnX2s=; b=kvW WF6BVEZvZhh8bcOuMLSv5uoRnn/1mJOJIjEogu320AViiF7YlBtprLn+0oaFmQ/C p+SUp7zFL2hi4YjpsslGVTVMHcIY2jBpUX+Ea8m/uxRwGj63l7T1L1DPU04/hGoz 
From: Anoob Joseph
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob, Thomas Monjalon
CC: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan
Subject: [PATCH v3 6/7] dma/odm: add copy and copy sg ops
Date: Fri, 19 Apr 2024 12:13:18 +0530
Message-ID: <20240419064319.149-7-anoobj@marvell.com>
In-Reply-To: <20240419064319.149-1-anoobj@marvell.com>
References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com>

From: Vidya Sagar Velumuri

Add ODM copy and copy SG ops.
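
For context, a minimal sketch (illustrative only, not part of this patch) of the
enqueue and completion flow these ops plug into, assuming a device that has
already been configured and started and using the completion op added later in
this series; dev_id, vchan and the IOVAs are placeholders. Note that the SG op
added here accepts at most 4 source and 4 destination pointers and requires the
total source and destination lengths to match:

#include <errno.h>
#include <stdbool.h>
#include <rte_dmadev.h>

/* Illustrative only: dev_id, vchan, src, dst and len are placeholders. */
static int
odm_copy_one(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	uint16_t last_idx = 0;
	bool error = false;
	int idx;

	/* Enqueue the copy and ring the doorbell in one call via the SUBMIT flag. */
	idx = rte_dma_copy(dev_id, vchan, src, dst, len, RTE_DMA_OP_FLAG_SUBMIT);
	if (idx < 0)
		return idx;

	/* Poll until the single outstanding descriptor completes. */
	while (rte_dma_completed(dev_id, vchan, 1, &last_idx, &error) == 0)
		;

	return error ? -EIO : 0;
}
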
Signed-off-by: Anoob Joseph Signed-off-by: Gowrishankar Muthukrishnan Signed-off-by: Vidya Sagar Velumuri --- drivers/dma/odm/odm_dmadev.c | 236 +++++++++++++++++++++++++++++++++++ 1 file changed, 236 insertions(+) diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c index 13b2588246..b21be83a89 100644 --- a/drivers/dma/odm/odm_dmadev.c +++ b/drivers/dma/odm/odm_dmadev.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include "odm.h" @@ -87,6 +88,238 @@ odm_dmadev_close(struct rte_dma_dev *dev) return 0; } +static int +odm_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length, + uint64_t flags) +{ + uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head; + const int num_words = ODM_IRING_ENTRY_SIZE_MIN; + struct odm_dev *odm = dev_private; + uint64_t *iring_head_ptr; + struct odm_queue *vq; + uint64_t h; + + const union odm_instr_hdr_s hdr = { + .s.ct = ODM_HDR_CT_CW_NC, + .s.xtype = ODM_XTYPE_INTERNAL, + .s.nfst = 1, + .s.nlst = 1, + }; + + vq = &odm->vq[vchan]; + + h = length; + h |= ((uint64_t)length << 32); + + const uint16_t max_iring_words = vq->iring_max_words; + + iring_sz_available = vq->iring_sz_available; + pending_submit_len = vq->pending_submit_len; + pending_submit_cnt = vq->pending_submit_cnt; + iring_head_ptr = vq->iring_mz->addr; + iring_head = vq->iring_head; + + if (iring_sz_available < num_words) + return -ENOSPC; + + if ((iring_head + num_words) >= max_iring_words) { + + iring_head_ptr[iring_head] = hdr.u; + iring_head = (iring_head + 1) % max_iring_words; + + iring_head_ptr[iring_head] = h; + iring_head = (iring_head + 1) % max_iring_words; + + iring_head_ptr[iring_head] = src; + iring_head = (iring_head + 1) % max_iring_words; + + iring_head_ptr[iring_head] = dst; + iring_head = (iring_head + 1) % max_iring_words; + } else { + iring_head_ptr[iring_head++] = hdr.u; + iring_head_ptr[iring_head++] = h; + iring_head_ptr[iring_head++] = src; + iring_head_ptr[iring_head++] = dst; + } + + pending_submit_len += num_words; + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan)); + vq->stats.submitted += pending_submit_cnt + 1; + vq->pending_submit_len = 0; + vq->pending_submit_cnt = 0; + } else { + vq->pending_submit_len = pending_submit_len; + vq->pending_submit_cnt++; + } + + vq->iring_head = iring_head; + + vq->iring_sz_available = iring_sz_available - num_words; + + /* No extra space to save. Skip entry in extra space ring. 
*/ + vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry; + + return vq->desc_idx++; +} + +static inline void +odm_dmadev_fill_sg(uint64_t *cmd, const struct rte_dma_sge *src, const struct rte_dma_sge *dst, + uint16_t nb_src, uint16_t nb_dst, union odm_instr_hdr_s *hdr) +{ + int i = 0, j = 0; + uint64_t h = 0; + + cmd[j++] = hdr->u; + /* When nb_src is even */ + if (!(nb_src & 0x1)) { + /* Fill the iring with src pointers */ + for (i = 1; i < nb_src; i += 2) { + h = ((uint64_t)src[i].length << 32) | src[i - 1].length; + cmd[j++] = h; + cmd[j++] = src[i - 1].addr; + cmd[j++] = src[i].addr; + } + + /* Fill the iring with dst pointers */ + for (i = 1; i < nb_dst; i += 2) { + h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length; + cmd[j++] = h; + cmd[j++] = dst[i - 1].addr; + cmd[j++] = dst[i].addr; + } + + /* Handle the last dst pointer when nb_dst is odd */ + if (nb_dst & 0x1) { + h = dst[nb_dst - 1].length; + cmd[j++] = h; + cmd[j++] = dst[nb_dst - 1].addr; + cmd[j++] = 0; + } + } else { + /* When nb_src is odd */ + + /* Fill the iring with src pointers */ + for (i = 1; i < nb_src; i += 2) { + h = ((uint64_t)src[i].length << 32) | src[i - 1].length; + cmd[j++] = h; + cmd[j++] = src[i - 1].addr; + cmd[j++] = src[i].addr; + } + + /* Handle the last src pointer */ + h = ((uint64_t)dst[0].length << 32) | src[nb_src - 1].length; + cmd[j++] = h; + cmd[j++] = src[nb_src - 1].addr; + cmd[j++] = dst[0].addr; + + /* Fill the iring with dst pointers */ + for (i = 2; i < nb_dst; i += 2) { + h = ((uint64_t)dst[i].length << 32) | dst[i - 1].length; + cmd[j++] = h; + cmd[j++] = dst[i - 1].addr; + cmd[j++] = dst[i].addr; + } + + /* Handle the last dst pointer when nb_dst is even */ + if (!(nb_dst & 0x1)) { + h = dst[nb_dst - 1].length; + cmd[j++] = h; + cmd[j++] = dst[nb_dst - 1].addr; + cmd[j++] = 0; + } + } +} + +static int +odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags) +{ + uint16_t pending_submit_len, pending_submit_cnt, iring_head, ins_ring_head; + uint16_t iring_sz_available, i, nb, num_words; + uint64_t cmd[ODM_IRING_ENTRY_SIZE_MAX]; + struct odm_dev *odm = dev_private; + uint32_t s_sz = 0, d_sz = 0; + uint64_t *iring_head_ptr; + struct odm_queue *vq; + union odm_instr_hdr_s hdr = { + .s.ct = ODM_HDR_CT_CW_NC, + .s.xtype = ODM_XTYPE_INTERNAL, + }; + + vq = &odm->vq[vchan]; + const uint16_t max_iring_words = vq->iring_max_words; + + iring_head_ptr = vq->iring_mz->addr; + iring_head = vq->iring_head; + iring_sz_available = vq->iring_sz_available; + ins_ring_head = vq->ins_ring_head; + pending_submit_len = vq->pending_submit_len; + pending_submit_cnt = vq->pending_submit_cnt; + + if (unlikely(nb_src > 4 || nb_dst > 4)) + return -EINVAL; + + for (i = 0; i < nb_src; i++) + s_sz += src[i].length; + + for (i = 0; i < nb_dst; i++) + d_sz += dst[i].length; + + if (s_sz != d_sz) + return -EINVAL; + + nb = nb_src + nb_dst; + hdr.s.nfst = nb_src; + hdr.s.nlst = nb_dst; + num_words = 1 + 3 * (nb / 2 + (nb & 0x1)); + + if (iring_sz_available < num_words) + return -ENOSPC; + + if ((iring_head + num_words) >= max_iring_words) { + uint16_t words_avail = max_iring_words - iring_head; + uint16_t words_pend = num_words - words_avail; + + if (unlikely(words_avail + words_pend > ODM_IRING_ENTRY_SIZE_MAX)) + return -ENOSPC; + + odm_dmadev_fill_sg(cmd, src, dst, nb_src, nb_dst, &hdr); + rte_memcpy((void *)&iring_head_ptr[iring_head], (void *)cmd, words_avail * 8); + rte_memcpy((void 
*)iring_head_ptr, (void *)&cmd[words_avail], words_pend * 8); + iring_head = words_pend; + } else { + odm_dmadev_fill_sg(&iring_head_ptr[iring_head], src, dst, nb_src, nb_dst, &hdr); + iring_head += num_words; + } + + pending_submit_len += num_words; + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan)); + vq->stats.submitted += pending_submit_cnt + 1; + vq->pending_submit_len = 0; + vq->pending_submit_cnt = 0; + } else { + vq->pending_submit_len = pending_submit_len; + vq->pending_submit_cnt++; + } + + vq->iring_head = iring_head; + + vq->iring_sz_available = iring_sz_available - num_words; + + /* Save extra space used for the instruction. */ + vq->extra_ins_sz[ins_ring_head] = num_words - 4; + + vq->ins_ring_head = (ins_ring_head + 1) % vq->cring_max_entry; + + return vq->desc_idx++; +} + static int odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats, uint32_t size) @@ -184,6 +417,9 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev dmadev->fp_obj->dev_private = odm; dmadev->dev_ops = &odm_dmadev_ops; + dmadev->fp_obj->copy = odm_dmadev_copy; + dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg; + odm->pci_dev = pci_dev; rc = odm_dev_init(odm); From patchwork Fri Apr 19 06:43:19 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anoob Joseph X-Patchwork-Id: 139540 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6861643EAC; Fri, 19 Apr 2024 08:44:18 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B69DF40A6D; Fri, 19 Apr 2024 08:43:49 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id AE6D7406B8 for ; Fri, 19 Apr 2024 08:43:47 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.24/8.17.1.24) with ESMTP id 43ILYmcW010585; Thu, 18 Apr 2024 23:43:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding:content-type; s= pfpt0220; bh=0iFhdggkUONNdEvQ/0kv0OB54Hd3402AMEVCmYOuKJI=; b=Tve fCJhymJF0GBuV2maYrvPiPceQ3UvoryUrx8L/queYlElwe3Cmsk2+m7SCxVXqbyR q2//4ylGkeMT/JJ2Z4KepAJB/DG84MiZUTWnkVFBRF0w+e0/xL+oo6mZC8dk8UZf QfL/sI3sUbni0DC5KGlpMuH1wHEN3RytaE1nyW+Dmd6DkRJ3obHRS2Fqxq4BH5rx MfPHpmbga+TvFtlWL8hNo6hFai/Ga6ezEW2DItneOh66lTqpl6WlH8GddbhuNDh4 AZD9vTnigd9mCLTye9p3KERBivj4anCnOmQ7YE/HKs090jnzMP7m47bnNiGn1jYk jOS6O3MlbsIwkzkk7Qg== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3xjhecq8ug-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Thu, 18 Apr 2024 23:43:46 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Thu, 18 Apr 2024 23:43:44 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Thu, 18 Apr 2024 23:43:44 -0700 
From: Anoob Joseph
To: Chengwen Feng, Kevin Laatz, Bruce Richardson, Jerin Jacob, Thomas Monjalon
CC: Vidya Sagar Velumuri, Gowrishankar Muthukrishnan
Subject: [PATCH v3 7/7] dma/odm: add remaining ops
Date: Fri, 19 Apr 2024 12:13:19 +0530
Message-ID: <20240419064319.149-8-anoobj@marvell.com>
In-Reply-To: <20240419064319.149-1-anoobj@marvell.com>
References: <20240417072708.322-1-anoobj@marvell.com> <20240419064319.149-1-anoobj@marvell.com>

From: Vidya Sagar Velumuri

Add all remaining ops such as fill, burst_capacity etc. Also update the
documentation.

Signed-off-by: Anoob Joseph
Signed-off-by: Gowrishankar Muthukrishnan
Signed-off-by: Vidya Sagar Velumuri
---
 MAINTAINERS                  |   1 +
 doc/guides/dmadevs/index.rst |   1 +
 doc/guides/dmadevs/odm.rst   |  92 +++++++++++++
 drivers/dma/odm/odm.h        |   4 +
 drivers/dma/odm/odm_dmadev.c | 250 +++++++++++++++++++++++++++++++++++
 5 files changed, 348 insertions(+)
 create mode 100644 doc/guides/dmadevs/odm.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index b8d2f7b3d8..38293008aa 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1273,6 +1273,7 @@ M: Gowrishankar Muthukrishnan
 M: Vidya Sagar Velumuri
 T: git://dpdk.org/next/dpdk-next-net-mrvl
 F: drivers/dma/odm/
+F: doc/guides/dmadevs/odm.rst

 NXP DPAA DMA
 M: Gagandeep Singh
diff --git a/doc/guides/dmadevs/index.rst b/doc/guides/dmadevs/index.rst
index 5bd25b32b9..ce9f6eb260 100644
--- a/doc/guides/dmadevs/index.rst
+++ b/doc/guides/dmadevs/index.rst
@@ -17,3 +17,4 @@ an application through DMA API.
    hisilicon
    idxd
    ioat
+   odm
diff --git a/doc/guides/dmadevs/odm.rst b/doc/guides/dmadevs/odm.rst
new file mode 100644
index 0000000000..a2eaab59a0
--- /dev/null
+++ b/doc/guides/dmadevs/odm.rst
@@ -0,0 +1,92 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2024 Marvell.
+
+Odyssey ODM DMA Device Driver
+=============================
+
+The ``odm`` DMA device driver provides a poll-mode driver (PMD) for Marvell Odyssey
+DMA Hardware Accelerator block found in Odyssey SoC. The block supports only mem
+to mem DMA transfers.
+
+ODM DMA device can support up to 32 queues and 16 VFs.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+Device Setup
+-------------
+
+ODM DMA device is initialized by kernel PF driver. The PF kernel driver is part
+of Marvell software packages for Odyssey.
+
+Kernel module can be inserted as in below example::
+
+    $ sudo insmod odyssey_odm.ko
+
+ODM DMA device can support up to 16 VFs::
+
+    $ sudo echo 16 > /sys/bus/pci/devices/0000\:08\:00.0/sriov_numvfs
+
+Above command creates 16 VFs with 2 queues each.
+
+The ``dpdk-devbind.py`` script, included with DPDK, can be used to show the
+presence of supported hardware. Running ``dpdk-devbind.py --status-dev dma``
+will show all the Odyssey ODM DMA devices.
+
+Devices using VFIO drivers
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The HW devices to be used will need to be bound to a user-space IO driver.
+The ``dpdk-devbind.py`` script can be used to view the state of the devices
+and to bind them to a suitable DPDK-supported driver, such as ``vfio-pci``.
+For example::
+
+   $ dpdk-devbind.py -b vfio-pci 0000:08:00.1
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To use the devices from an application, the dmadev API can be used.
+The device and its virtual channel are configured using the
+``rte_dma_configure()`` and ``rte_dma_vchan_setup()`` APIs.
+Once configured, the device can be made ready for use
+by calling the ``rte_dma_start()`` API.
+
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Refer to the :ref:`Enqueue / Dequeue APIs ` section
+of the dmadev library documentation for details on operation enqueue and
+submission API usage.
+
+Performance Tuning Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To achieve higher performance, the DMA device needs to be tuned using
+parameters exposed by the PF kernel driver.
+
+The following options are exposed by the kernel PF driver via the devlink
+interface for performance tuning.
+
+``eng_sel``
+
+  The ODM DMA device has two engines internally. The engine-to-queue mapping is
+  decided by a hardware register, which can be configured as below::
+
+   $ /sbin/devlink dev param set pci/0000:08:00.0 name eng_sel value 3435973836 cmode runtime
+
+  Each bit in the register corresponds to one queue, and each queue is
+  associated with one engine. If the bit corresponding to a queue is 0,
+  engine 0 is picked; if it is 1, engine 1 is picked.
+
+  In the above command, the register value is set to
+  ``1100 1100 1100 1100 1100 1100 1100 1100`` (0xCCCCCCCC), which allows
+  alternate engines to be used with alternate VFs (assuming the system has
+  16 VFs with 2 queues each).
+
+``max_load_request``
+
+  Specifies the maximum number of outstanding load requests on the internal
+  bus. Values can range from 1 to 512. Set it to 512 for the maximum number of
+  requests in flight::
+
+   $ /sbin/devlink dev param set pci/0000:08:00.0 name max_load_request value 512 cmode runtime
diff --git a/drivers/dma/odm/odm.h b/drivers/dma/odm/odm.h
index e1373e0c7f..1d60d2d11a 100644
--- a/drivers/dma/odm/odm.h
+++ b/drivers/dma/odm/odm.h
@@ -75,6 +75,10 @@ extern int odm_logtype;
 	rte_log(RTE_LOG_INFO, odm_logtype,                                     \
 		RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__,    \
 			RTE_FMT_TAIL(__VA_ARGS__, )))
+#define odm_debug(...)                                                         \
+	rte_log(RTE_LOG_DEBUG, odm_logtype,                                    \
+		RTE_FMT("%s(): %u" RTE_FMT_HEAD(__VA_ARGS__, ), __func__, __LINE__,    \
+			RTE_FMT_TAIL(__VA_ARGS__, )))
 
 #define ODM_MEMZONE_FLAGS                                                      \
 	(RTE_MEMZONE_1GB | RTE_MEMZONE_16MB | RTE_MEMZONE_16GB | RTE_MEMZONE_256MB |  \
diff --git a/drivers/dma/odm/odm_dmadev.c b/drivers/dma/odm/odm_dmadev.c
index b21be83a89..57bd6923f1 100644
--- a/drivers/dma/odm/odm_dmadev.c
+++ b/drivers/dma/odm/odm_dmadev.c
@@ -320,6 +320,251 @@ odm_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *
 	return vq->desc_idx++;
 }
 
+static int
+odm_dmadev_fill(void *dev_private, uint16_t vchan, uint64_t pattern, rte_iova_t dst,
+		uint32_t length, uint64_t flags)
+{
+	uint16_t pending_submit_len, pending_submit_cnt, iring_sz_available, iring_head;
+	const int num_words = ODM_IRING_ENTRY_SIZE_MIN;
+	struct odm_dev *odm = dev_private;
+	uint64_t *iring_head_ptr;
+	struct odm_queue *vq;
+	uint64_t h;
+
+	vq = &odm->vq[vchan];
+
+	union odm_instr_hdr_s hdr = {
+		.s.ct = ODM_HDR_CT_CW_NC,
+		.s.nfst = 0,
+		.s.nlst = 1,
+	};
+
+	h = (uint64_t)length;
+
+	switch (pattern) {
+	case 0:
+		hdr.s.xtype = ODM_XTYPE_FILL0;
+		break;
+	case 0xffffffffffffffff:
+		hdr.s.xtype = ODM_XTYPE_FILL1;
+		break;
+	default:
+		return -ENOTSUP;
+	}
+
+	const uint16_t max_iring_words = vq->iring_max_words;
+
+	iring_sz_available = vq->iring_sz_available;
+	pending_submit_len = vq->pending_submit_len;
+	pending_submit_cnt = vq->pending_submit_cnt;
+	iring_head_ptr = vq->iring_mz->addr;
+	iring_head = vq->iring_head;
+
+	if (iring_sz_available < num_words)
+		return -ENOSPC;
+
+	if ((iring_head + num_words) >= max_iring_words) {
+
+		iring_head_ptr[iring_head] = hdr.u;
+		iring_head = (iring_head + 1) % max_iring_words;
+
+		iring_head_ptr[iring_head] = h;
+		iring_head = (iring_head + 1) % max_iring_words;
+
+		iring_head_ptr[iring_head] = dst;
+		iring_head = (iring_head + 1) % max_iring_words;
+
+		iring_head_ptr[iring_head] = 0;
+		iring_head = (iring_head + 1) % max_iring_words;
+	} else {
+		iring_head_ptr[iring_head] = hdr.u;
+		iring_head_ptr[iring_head + 1] = h;
+		iring_head_ptr[iring_head + 2] = dst;
+		iring_head_ptr[iring_head + 3] = 0;
+		iring_head += num_words;
+	}
+
+	pending_submit_len += num_words;
+
+	if (flags & RTE_DMA_OP_FLAG_SUBMIT) {
+		rte_wmb();
+		odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+		vq->stats.submitted += pending_submit_cnt + 1;
+		vq->pending_submit_len = 0;
+		vq->pending_submit_cnt = 0;
+	} else {
+		vq->pending_submit_len = pending_submit_len;
+		vq->pending_submit_cnt++;
+	}
+
+	vq->iring_head = iring_head;
+	vq->iring_sz_available = iring_sz_available - num_words;
+
+	/* No extra space to save. Skip entry in extra space ring. */
+	vq->ins_ring_head = (vq->ins_ring_head + 1) % vq->cring_max_entry;
+
+	return vq->desc_idx++;
+}
+
+static uint16_t
+odm_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx,
+		     bool *has_error)
+{
+	const union odm_cmpl_ent_s cmpl_zero = {0};
+	uint16_t cring_head, iring_sz_available;
+	struct odm_dev *odm = dev_private;
+	union odm_cmpl_ent_s cmpl;
+	struct odm_queue *vq;
+	uint64_t nb_err = 0;
+	uint32_t *cmpl_ptr;
+	int cnt;
+
+	vq = &odm->vq[vchan];
+	const uint32_t *base_addr = vq->cring_mz->addr;
+	const uint16_t cring_max_entry = vq->cring_max_entry;
+
+	cring_head = vq->cring_head;
+	iring_sz_available = vq->iring_sz_available;
+
+	if (unlikely(vq->stats.submitted == vq->stats.completed)) {
+		*last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+		return 0;
+	}
+
+	for (cnt = 0; cnt < nb_cpls; cnt++) {
+		cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+		cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+						  rte_memory_order_relaxed);
+		if (!cmpl.s.valid)
+			break;
+
+		if (cmpl.s.cmp_code)
+			nb_err++;
+
+		/* Free space for enqueue */
+		iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+		/* Clear instruction extra space */
+		vq->extra_ins_sz[cring_head] = 0;
+
+		rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+					  rte_memory_order_relaxed);
+		cring_head = (cring_head + 1) % cring_max_entry;
+	}
+
+	vq->stats.errors += nb_err;
+
+	if (unlikely(has_error != NULL && nb_err))
+		*has_error = true;
+
+	vq->cring_head = cring_head;
+	vq->iring_sz_available = iring_sz_available;
+
+	vq->stats.completed += cnt;
+
+	*last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+	return cnt;
+}
+
+static uint16_t
+odm_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls,
+			    uint16_t *last_idx, enum rte_dma_status_code *status)
+{
+	const union odm_cmpl_ent_s cmpl_zero = {0};
+	uint16_t cring_head, iring_sz_available;
+	struct odm_dev *odm = dev_private;
+	union odm_cmpl_ent_s cmpl;
+	struct odm_queue *vq;
+	uint32_t *cmpl_ptr;
+	int cnt;
+
+	vq = &odm->vq[vchan];
+	const uint32_t *base_addr = vq->cring_mz->addr;
+	const uint16_t cring_max_entry = vq->cring_max_entry;
+
+	cring_head = vq->cring_head;
+	iring_sz_available = vq->iring_sz_available;
+
+	if (vq->stats.submitted == vq->stats.completed) {
+		*last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+		return 0;
+	}
+
+#ifdef ODM_DEBUG
+	odm_debug("cring_head: 0x%" PRIx16, cring_head);
+	odm_debug("Submitted: 0x%" PRIx64, vq->stats.submitted);
+	odm_debug("Completed: 0x%" PRIx64, vq->stats.completed);
+	odm_debug("Hardware count: 0x%" PRIx64, odm_read64(odm->rbase + ODM_VDMA_CNT(vchan)));
+#endif
+
+	for (cnt = 0; cnt < nb_cpls; cnt++) {
+		cmpl_ptr = RTE_PTR_ADD(base_addr, cring_head * sizeof(cmpl));
+		cmpl.u = rte_atomic_load_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr,
+						  rte_memory_order_relaxed);
+		if (!cmpl.s.valid)
+			break;
+
+		status[cnt] = cmpl.s.cmp_code;
+
+		if (cmpl.s.cmp_code)
+			vq->stats.errors++;
+
+		/* Free space for enqueue */
+		iring_sz_available += 4 + vq->extra_ins_sz[cring_head];
+
+		/* Clear instruction extra space */
+		vq->extra_ins_sz[cring_head] = 0;
+
+		rte_atomic_store_explicit((RTE_ATOMIC(uint32_t) *)cmpl_ptr, cmpl_zero.u,
+					  rte_memory_order_relaxed);
+		cring_head = (cring_head + 1) % cring_max_entry;
+	}
+
+	vq->cring_head = cring_head;
+	vq->iring_sz_available = iring_sz_available;
+
+	vq->stats.completed += cnt;
+
+	*last_idx = (vq->stats.completed_offset + vq->stats.completed - 1) & 0xFFFF;
+
+	return cnt;
+}
+
+static int
+odm_dmadev_submit(void *dev_private, uint16_t vchan)
+{
+	struct odm_dev *odm = dev_private;
+	uint16_t pending_submit_len;
+	struct odm_queue *vq;
+
+	vq = &odm->vq[vchan];
+	pending_submit_len = vq->pending_submit_len;
+
+	if (pending_submit_len == 0)
+		return 0;
+
+	rte_wmb();
+	odm_write64(pending_submit_len, odm->rbase + ODM_VDMA_DBELL(vchan));
+	vq->pending_submit_len = 0;
+	vq->stats.submitted += vq->pending_submit_cnt;
+	vq->pending_submit_cnt = 0;
+
+	return 0;
+}
+
+static uint16_t
+odm_dmadev_burst_capacity(const void *dev_private, uint16_t vchan)
+{
+	const struct odm_dev *odm = dev_private;
+	const struct odm_queue *vq;
+
+	vq = &odm->vq[vchan];
+	return (vq->iring_sz_available / ODM_IRING_ENTRY_SIZE_MIN);
+}
+
 static int
 odm_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats,
 	      uint32_t size)
@@ -419,6 +664,11 @@ odm_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_dev
 	dmadev->fp_obj->copy = odm_dmadev_copy;
 	dmadev->fp_obj->copy_sg = odm_dmadev_copy_sg;
+	dmadev->fp_obj->fill = odm_dmadev_fill;
+	dmadev->fp_obj->submit = odm_dmadev_submit;
+	dmadev->fp_obj->completed = odm_dmadev_completed;
+	dmadev->fp_obj->completed_status = odm_dmadev_completed_status;
+	dmadev->fp_obj->burst_capacity = odm_dmadev_burst_capacity;
 
 	odm->pci_dev = pci_dev;
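As an editorial aside, the sketch below shows how the ops wired up above are reached
from an application through the public dmadev API (``rte_dma_fill()`` and
``rte_dma_completed()``): it enqueues a zero-fill and busy-polls for completion.
It assumes the device and vchan are already configured and started
(``rte_dma_configure()``, ``rte_dma_vchan_setup()``, ``rte_dma_start()``); the helper
name and error handling are illustrative and not part of this patch.

#include <errno.h>
#include <stdbool.h>

#include <rte_dmadev.h>
#include <rte_pause.h>

/* Illustrative helper: enqueue a fill of 'len' bytes of zeroes at 'dst' on
 * (dev_id, vchan) and busy-poll until the operation completes or fails.
 */
static int
example_fill_and_wait(int16_t dev_id, uint16_t vchan, rte_iova_t dst, uint32_t len)
{
	uint16_t last_idx = 0, nb_done = 0;
	bool has_error = false;
	int idx;

	/* The ODM PMD accepts only all-zeroes or all-ones fill patterns
	 * (ODM_XTYPE_FILL0/FILL1); other patterns return -ENOTSUP. */
	idx = rte_dma_fill(dev_id, vchan, 0, dst, len, RTE_DMA_OP_FLAG_SUBMIT);
	if (idx < 0)
		return idx;

	/* Poll the completion ring; stop on completion or reported error. */
	do {
		nb_done = rte_dma_completed(dev_id, vchan, 1, &last_idx, &has_error);
		rte_pause();
	} while (nb_done == 0 && !has_error);

	return has_error ? -EIO : 0;
}

A plain copy would follow the same enqueue/poll pattern through ``rte_dma_copy()``,
and several operations can be batched by omitting ``RTE_DMA_OP_FLAG_SUBMIT`` and
issuing a single ``rte_dma_submit()`` at the end, which maps to the deferred
doorbell path in ``odm_dmadev_submit()``.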