From patchwork Wed Aug 30 07:56:54 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 130868
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Vamsi Attunuru
Cc: Pavan Nikhilesh
Subject: [PATCH 1/2] dma/cnxk: use mempool for DMA chunk pool
Date: Wed, 30 Aug 2023 13:26:54 +0530
Message-ID: <20230830075655.8004-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

Use rte_mempool for DMA chunk pool to allow using mempool cache.
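For orientation, the driver's command queue is a linked list of fixed-size chunks: commands are 64-bit words appended to the current chunk, and when a chunk fills up, the last usable word stores a pointer to a freshly allocated chunk. The sketch below is a simplified, standalone rendering of that scheme (not the driver code itself); `malloc()` stands in for `rte_mempool_get()`, and the constants mirror a 4096-byte chunk of 512 words with headroom reserved for the next-chunk pointer.

```c
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_SZ_WORDS 512                  /* 4096 B / 8 B per command word */
#define CHUNK_SZ_M1    (CHUNK_SZ_WORDS - 2) /* headroom for next-chunk pointer */

struct queue {
	uint64_t *chunk_base; /* chunk currently being filled */
	uint16_t chunk_head;  /* write index within that chunk */
};

/* Append cmd_count 64-bit words; malloc() stands in for rte_mempool_get().
 * The old chunk is intentionally not freed: in the driver the hardware
 * still consumes it and the pool reclaims it later. */
static int queue_write(struct queue *q, const uint64_t *cmds, int cmd_count)
{
	uint64_t *ptr = q->chunk_base;

	if (q->chunk_head + cmd_count < CHUNK_SZ_M1) {
		/* Fast path: the whole command fits in the current chunk. */
		ptr += q->chunk_head;
		q->chunk_head += cmd_count;
		while (cmd_count--)
			*ptr++ = *cmds++;
	} else {
		/* Slow path: split across chunks, linking via the last word. */
		uint64_t *new_buff = malloc(CHUNK_SZ_WORDS * sizeof(uint64_t));
		int count = CHUNK_SZ_M1 - q->chunk_head;

		if (new_buff == NULL)
			return -1;
		ptr += q->chunk_head;
		cmd_count -= count;
		while (count--)
			*ptr++ = *cmds++;
		/* Last usable slot of the old chunk points to the new one. */
		*ptr = (uint64_t)(uintptr_t)new_buff;
		q->chunk_base = new_buff;
		q->chunk_head = (uint16_t)cmd_count;
		ptr = new_buff;
		while (cmd_count-- > 0)
			*ptr++ = *cmds++;
	}
	return 0;
}
```

The real `__dpi_queue_write()` adds a further end-of-chunk re-check after the split; this sketch folds that case into the slow path for brevity.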
Signed-off-by: Pavan Nikhilesh
---
 drivers/common/cnxk/roc_dpi.c      |  95 +++++-------------------
 drivers/common/cnxk/roc_dpi.h      |  28 +-------
 drivers/common/cnxk/roc_dpi_priv.h |   3 -
 drivers/common/cnxk/roc_platform.c |   1 +
 drivers/common/cnxk/roc_platform.h |   2 +
 drivers/common/cnxk/version.map    |   1 +
 drivers/dma/cnxk/cnxk_dmadev.c     | 108 +++++++++++++++++++++--------
 drivers/dma/cnxk/cnxk_dmadev.h     |  10 ++-
 8 files changed, 110 insertions(+), 138 deletions(-)

diff --git a/drivers/common/cnxk/roc_dpi.c b/drivers/common/cnxk/roc_dpi.c
index 0e2f803077..9cb479371a 100644
--- a/drivers/common/cnxk/roc_dpi.c
+++ b/drivers/common/cnxk/roc_dpi.c
@@ -1,14 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2021 Marvell.
  */
+
+#include "roc_api.h"
+#include "roc_priv.h"
 #include
 #include
 #include
 #include
-#include "roc_api.h"
-#include "roc_priv.h"
-
 #define DPI_PF_MBOX_SYSFS_ENTRY "dpi_device_config"
 
 static inline int
@@ -52,17 +52,12 @@ roc_dpi_disable(struct roc_dpi *dpi)
 }
 
 int
-roc_dpi_configure(struct roc_dpi *roc_dpi)
+roc_dpi_configure(struct roc_dpi *roc_dpi, uint32_t chunk_sz, uint64_t aura, uint64_t chunk_base)
 {
 	struct plt_pci_device *pci_dev;
-	const struct plt_memzone *dpi_mz;
 	dpi_mbox_msg_t mbox_msg;
-	struct npa_pool_s pool;
-	struct npa_aura_s aura;
-	int rc, count, buflen;
-	uint64_t aura_handle;
-	plt_iova_t iova;
-	char name[32];
+	uint64_t reg;
+	int rc;
 
 	if (!roc_dpi) {
 		plt_err("roc_dpi is NULL");
@@ -70,80 +65,31 @@ roc_dpi_configure(struct roc_dpi *roc_dpi)
 	}
 
 	pci_dev = roc_dpi->pci_dev;
-	memset(&pool, 0, sizeof(struct npa_pool_s));
-	pool.nat_align = 1;
-
-	memset(&aura, 0, sizeof(aura));
-	rc = roc_npa_pool_create(&aura_handle, DPI_CMD_QUEUE_SIZE,
-				 DPI_CMD_QUEUE_BUFS, &aura, &pool, 0);
-	if (rc) {
-		plt_err("Failed to create NPA pool, err %d\n", rc);
-		return rc;
-	}
-
-	snprintf(name, sizeof(name), "dpimem%d:%d:%d:%d", pci_dev->addr.domain, pci_dev->addr.bus,
-		 pci_dev->addr.devid, pci_dev->addr.function);
-	buflen = DPI_CMD_QUEUE_SIZE * DPI_CMD_QUEUE_BUFS;
-	dpi_mz = plt_memzone_reserve_aligned(name, buflen, 0, DPI_CMD_QUEUE_SIZE);
-	if (dpi_mz == NULL) {
-		plt_err("dpi memzone reserve failed");
-		rc = -ENOMEM;
-		goto err1;
-	}
-
-	roc_dpi->mz = dpi_mz;
-	iova = dpi_mz->iova;
-	for (count = 0; count < DPI_CMD_QUEUE_BUFS; count++) {
-		roc_npa_aura_op_free(aura_handle, 0, iova);
-		iova += DPI_CMD_QUEUE_SIZE;
-	}
-
-	roc_dpi->chunk_base = (void *)roc_npa_aura_op_alloc(aura_handle, 0);
-	if (!roc_dpi->chunk_base) {
-		plt_err("Failed to alloc buffer from NPA aura");
-		rc = -ENOMEM;
-		goto err2;
-	}
-
-	roc_dpi->chunk_next = (void *)roc_npa_aura_op_alloc(aura_handle, 0);
-	if (!roc_dpi->chunk_next) {
-		plt_err("Failed to alloc buffer from NPA aura");
-		rc = -ENOMEM;
-		goto err2;
-	}
-
-	roc_dpi->aura_handle = aura_handle;
-	/* subtract 2 as they have already been alloc'ed above */
-	roc_dpi->pool_size_m1 = (DPI_CMD_QUEUE_SIZE >> 3) - 2;
+	roc_dpi_disable(roc_dpi);
+	reg = plt_read64(roc_dpi->rbase + DPI_VDMA_SADDR);
+	while (!(reg & BIT_ULL(63)))
+		reg = plt_read64(roc_dpi->rbase + DPI_VDMA_SADDR);
 
 	plt_write64(0x0, roc_dpi->rbase + DPI_VDMA_REQQ_CTL);
-	plt_write64(((uint64_t)(roc_dpi->chunk_base) >> 7) << 7,
-		    roc_dpi->rbase + DPI_VDMA_SADDR);
+	plt_write64(chunk_base, roc_dpi->rbase + DPI_VDMA_SADDR);
 	mbox_msg.u[0] = 0;
 	mbox_msg.u[1] = 0;
 
 	/* DPI PF driver expects vfid starts from index 0 */
 	mbox_msg.s.vfid = roc_dpi->vfid;
 	mbox_msg.s.cmd = DPI_QUEUE_OPEN;
-	mbox_msg.s.csize = DPI_CMD_QUEUE_SIZE;
-	mbox_msg.s.aura = roc_npa_aura_handle_to_aura(aura_handle);
+	mbox_msg.s.csize = chunk_sz;
+	mbox_msg.s.aura = aura;
 	mbox_msg.s.sso_pf_func = idev_sso_pffunc_get();
 	mbox_msg.s.npa_pf_func = idev_npa_pffunc_get();
 
 	rc = send_msg_to_pf(&pci_dev->addr, (const char *)&mbox_msg, sizeof(dpi_mbox_msg_t));
-	if (rc < 0) {
+	if (rc < 0)
 		plt_err("Failed to send mbox message %d to DPI PF, err %d", mbox_msg.s.cmd, rc);
-		goto err2;
-	}
 
 	return rc;
-
-err2:
-	plt_memzone_free(dpi_mz);
-err1:
-	roc_npa_pool_destroy(aura_handle);
-	return rc;
 }
 
 int
@@ -153,11 +99,9 @@ roc_dpi_dev_init(struct roc_dpi *roc_dpi)
 	uint16_t vfid;
 
 	roc_dpi->rbase = pci_dev->mem_resource[0].addr;
-	vfid = ((pci_dev->addr.devid & 0x1F) << 3) |
-	       (pci_dev->addr.function & 0x7);
+	vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7);
 	vfid -= 1;
 	roc_dpi->vfid = vfid;
-	plt_spinlock_init(&roc_dpi->chunk_lock);
 
 	return 0;
 }
@@ -180,14 +124,9 @@ roc_dpi_dev_fini(struct roc_dpi *roc_dpi)
 	mbox_msg.s.vfid = roc_dpi->vfid;
 	mbox_msg.s.cmd = DPI_QUEUE_CLOSE;
 
-	rc = send_msg_to_pf(&pci_dev->addr, (const char *)&mbox_msg,
-			    sizeof(dpi_mbox_msg_t));
+	rc = send_msg_to_pf(&pci_dev->addr, (const char *)&mbox_msg, sizeof(dpi_mbox_msg_t));
 	if (rc < 0)
-		plt_err("Failed to send mbox message %d to DPI PF, err %d",
-			mbox_msg.s.cmd, rc);
-
-	roc_npa_pool_destroy(roc_dpi->aura_handle);
-	plt_memzone_free(roc_dpi->mz);
+		plt_err("Failed to send mbox message %d to DPI PF, err %d", mbox_msg.s.cmd, rc);
 
 	return rc;
 }
diff --git a/drivers/common/cnxk/roc_dpi.h b/drivers/common/cnxk/roc_dpi.h
index 2f061b07c5..4ebde5b8a6 100644
--- a/drivers/common/cnxk/roc_dpi.h
+++ b/drivers/common/cnxk/roc_dpi.h
@@ -5,41 +5,17 @@
 #ifndef _ROC_DPI_H_
 #define _ROC_DPI_H_
 
-struct roc_dpi_args {
-	uint8_t num_ssegs;
-	uint8_t num_dsegs;
-	uint8_t comp_type;
-	uint8_t direction;
-	uint8_t sdevice;
-	uint8_t ddevice;
-	uint8_t swap;
-	uint8_t use_lock : 1;
-	uint8_t tt : 7;
-	uint16_t func;
-	uint16_t grp;
-	uint32_t tag;
-	uint64_t comp_ptr;
-};
-
 struct roc_dpi {
-	/* Input parameters */
 	struct plt_pci_device *pci_dev;
-	/* End of Input parameters */
-	const struct plt_memzone *mz;
 	uint8_t *rbase;
 	uint16_t vfid;
-	uint16_t pool_size_m1;
-	uint16_t chunk_head;
-	uint64_t *chunk_base;
-	uint64_t *chunk_next;
-	uint64_t aura_handle;
-	plt_spinlock_t chunk_lock;
 } __plt_cache_aligned;
 
 int __roc_api roc_dpi_dev_init(struct roc_dpi *roc_dpi);
 int __roc_api roc_dpi_dev_fini(struct roc_dpi *roc_dpi);
-int __roc_api roc_dpi_configure(struct roc_dpi *dpi);
+int __roc_api roc_dpi_configure(struct roc_dpi *dpi, uint32_t chunk_sz, uint64_t aura,
+				uint64_t chunk_base);
 int __roc_api roc_dpi_enable(struct roc_dpi *dpi);
 int __roc_api roc_dpi_disable(struct roc_dpi *dpi);
diff --git a/drivers/common/cnxk/roc_dpi_priv.h b/drivers/common/cnxk/roc_dpi_priv.h
index 1fa1a715d3..518a3e7351 100644
--- a/drivers/common/cnxk/roc_dpi_priv.h
+++ b/drivers/common/cnxk/roc_dpi_priv.h
@@ -16,9 +16,6 @@
 #define DPI_REG_DUMP	0x3
 #define DPI_GET_REG_CFG 0x4
 
-#define DPI_CMD_QUEUE_SIZE 4096
-#define DPI_CMD_QUEUE_BUFS 1024
-
 typedef union dpi_mbox_msg_t {
 	uint64_t u[2];
 	struct dpi_mbox_message_s {
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index f91b95ceab..f8287bcf6b 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -70,4 +70,5 @@ RTE_LOG_REGISTER(cnxk_logtype_npc, pmd.net.cnxk.flow, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_sso, pmd.event.cnxk, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_tim, pmd.event.cnxk.timer, NOTICE);
 RTE_LOG_REGISTER(cnxk_logtype_tm, pmd.net.cnxk.tm, NOTICE);
+RTE_LOG_REGISTER(cnxk_logtype_dpi, pmd.dma.cnxk.dpi, NOTICE);
 RTE_LOG_REGISTER_DEFAULT(cnxk_logtype_ree, NOTICE);
diff --git a/drivers/common/cnxk/roc_platform.h b/drivers/common/cnxk/roc_platform.h
index 08f83aba12..dfd4da21b6 100644
--- a/drivers/common/cnxk/roc_platform.h
+++ b/drivers/common/cnxk/roc_platform.h
@@ -242,6 +242,7 @@ extern int cnxk_logtype_sso;
 extern int cnxk_logtype_tim;
 extern int cnxk_logtype_tm;
 extern int cnxk_logtype_ree;
+extern int cnxk_logtype_dpi;
 
 #define plt_err(fmt, args...) \
 	RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", __func__, __LINE__, ##args)
@@ -270,6 +271,7 @@ extern int cnxk_logtype_ree;
 #define plt_tim_dbg(fmt, ...)	plt_dbg(tim, fmt, ##__VA_ARGS__)
 #define plt_tm_dbg(fmt, ...)	plt_dbg(tm, fmt, ##__VA_ARGS__)
 #define plt_ree_dbg(fmt, ...)	plt_dbg(ree, fmt, ##__VA_ARGS__)
+#define plt_dpi_dbg(fmt, ...)	plt_dbg(dpi, fmt, ##__VA_ARGS__)
 
 /* Datapath logs */
 #define plt_dp_err(fmt, args...) \
diff --git a/drivers/common/cnxk/version.map b/drivers/common/cnxk/version.map
index 8c71497df8..1540dfadf9 100644
--- a/drivers/common/cnxk/version.map
+++ b/drivers/common/cnxk/version.map
@@ -7,6 +7,7 @@ INTERNAL {
	cnxk_ipsec_outb_roundup_byte;
	cnxk_logtype_base;
	cnxk_logtype_cpt;
+	cnxk_logtype_dpi;
	cnxk_logtype_mbox;
	cnxk_logtype_ml;
	cnxk_logtype_nix;
diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index eec6a897e2..35c2b79156 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -70,10 +71,54 @@ cnxk_dmadev_vchan_free(struct cnxk_dpi_vf_s *dpivf, uint16_t vchan)
 	return 0;
 }
 
+static int
+cnxk_dmadev_chunk_pool_create(struct rte_dma_dev *dev)
+{
+	char pool_name[RTE_MEMPOOL_NAMESIZE];
+	struct cnxk_dpi_vf_s *dpivf = NULL;
+	uint64_t nb_chunks;
+	int rc;
+
+	dpivf = dev->fp_obj->dev_private;
+	/* Create chunk pool. */
+	snprintf(pool_name, sizeof(pool_name), "cnxk_dma_chunk_pool%d", dev->data->dev_id);
+
+	nb_chunks = DPI_CMD_QUEUE_BUFS;
+	nb_chunks += (CNXK_DMA_POOL_MAX_CACHE_SZ * rte_lcore_count());
+	dpivf->chunk_pool =
+		rte_mempool_create_empty(pool_name, nb_chunks, DPI_CMD_QUEUE_BUF_SIZE,
+					 CNXK_DMA_POOL_MAX_CACHE_SZ, 0, rte_socket_id(), 0);
+
+	if (dpivf->chunk_pool == NULL) {
+		plt_err("Unable to create chunkpool.");
+		return -ENOMEM;
+	}
+
+	rc = rte_mempool_set_ops_byname(dpivf->chunk_pool, rte_mbuf_platform_mempool_ops(), NULL);
+	if (rc < 0) {
+		plt_err("Unable to set chunkpool ops");
+		goto free;
+	}
+
+	rc = rte_mempool_populate_default(dpivf->chunk_pool);
+	if (rc < 0) {
+		plt_err("Unable to populate chunkpool.");
+		goto free;
+	}
+	dpivf->aura = roc_npa_aura_handle_to_aura(dpivf->chunk_pool->pool_id);
+
+	return 0;
+
+free:
+	rte_mempool_free(dpivf->chunk_pool);
+	return rc;
+}
+
 static int
 cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz)
 {
 	struct cnxk_dpi_vf_s *dpivf = NULL;
+	void *chunk;
 	int rc = 0;
 
 	RTE_SET_USED(conf_sz);
@@ -92,12 +137,29 @@ cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf,
 	if (dpivf->flag & CNXK_DPI_DEV_CONFIG)
 		return rc;
 
-	rc = roc_dpi_configure(&dpivf->rdpi);
+	rc = cnxk_dmadev_chunk_pool_create(dev);
+	if (rc < 0) {
+		plt_err("DMA pool configure failed err = %d", rc);
+		goto done;
+	}
+
+	rc = rte_mempool_get(dpivf->chunk_pool, &chunk);
+	if (rc < 0) {
+		plt_err("DMA failed to get chunk pointer err = %d", rc);
+		rte_mempool_free(dpivf->chunk_pool);
+		goto done;
+	}
+
+	rc = roc_dpi_configure(&dpivf->rdpi, DPI_CMD_QUEUE_BUF_SIZE, dpivf->aura, (uint64_t)chunk);
 	if (rc < 0) {
 		plt_err("DMA configure failed err = %d", rc);
+		rte_mempool_free(dpivf->chunk_pool);
 		goto done;
 	}
 
+	dpivf->chunk_base = chunk;
+	dpivf->chunk_head = 0;
+	dpivf->chunk_size_m1 = (DPI_CMD_QUEUE_BUF_SIZE >> 3) - 2;
+
 	dpivf->flag |= CNXK_DPI_DEV_CONFIG;
 
 done:
@@ -335,7 +397,7 @@ cnxk_dmadev_close(struct rte_dma_dev *dev)
 }
 
 static inline int
-__dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count)
+__dpi_queue_write(struct cnxk_dpi_vf_s *dpi, uint64_t *cmds, int cmd_count)
 {
 	uint64_t *ptr = dpi->chunk_base;
 
@@ -346,31 +408,25 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count)
	 * Normally there is plenty of room in the current buffer for the
	 * command
	 */
-	if (dpi->chunk_head + cmd_count < dpi->pool_size_m1) {
+	if (dpi->chunk_head + cmd_count < dpi->chunk_size_m1) {
 		ptr += dpi->chunk_head;
 		dpi->chunk_head += cmd_count;
 		while (cmd_count--)
 			*ptr++ = *cmds++;
 	} else {
+		uint64_t *new_buff = NULL;
 		int count;
-		uint64_t *new_buff = dpi->chunk_next;
-
-		dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0);
-		if (!dpi->chunk_next) {
-			plt_dp_dbg("Failed to alloc next buffer from NPA");
-			/* NPA failed to allocate a buffer. Restoring chunk_next
-			 * to its original address.
-			 */
-			dpi->chunk_next = new_buff;
-			return -ENOSPC;
+
+		if (rte_mempool_get(dpi->chunk_pool, (void **)&new_buff) < 0) {
+			plt_dpi_dbg("Failed to alloc next buffer from NPA");
+			return -ENOMEM;
 		}
 
 		/*
 		 * Figure out how many cmd words will fit in this buffer.
 		 * One location will be needed for the next buffer pointer.
 		 */
-		count = dpi->pool_size_m1 - dpi->chunk_head;
+		count = dpi->chunk_size_m1 - dpi->chunk_head;
 		ptr += dpi->chunk_head;
 		cmd_count -= count;
 		while (count--)
@@ -395,19 +451,11 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count)
 			*ptr++ = *cmds++;
 
 		/* queue index may be greater than pool size */
-		if (dpi->chunk_head >= dpi->pool_size_m1) {
-			new_buff = dpi->chunk_next;
-			dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0);
-			if (!dpi->chunk_next) {
-				plt_dp_dbg("Failed to alloc next buffer from NPA");
-
-				/* NPA failed to allocate a buffer. Restoring chunk_next
-				 * to its original address.
-				 */
-				dpi->chunk_next = new_buff;
-				return -ENOSPC;
+		if (dpi->chunk_head == dpi->chunk_size_m1) {
+			if (rte_mempool_get(dpi->chunk_pool, (void **)&new_buff) < 0) {
+				plt_dpi_dbg("Failed to alloc next buffer from NPA");
+				return -ENOMEM;
 			}
-
 			/* Write next buffer address */
 			*ptr = (uint64_t)new_buff;
 			dpi->chunk_base = new_buff;
@@ -465,7 +513,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d
 	cmd[num_words++] = length;
 	cmd[num_words++] = lptr;
 
-	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
+	rc = __dpi_queue_write(dpivf, cmd, num_words);
 	if (unlikely(rc)) {
 		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
@@ -537,7 +585,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 		lptr++;
 	}
 
-	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
+	rc = __dpi_queue_write(dpivf, cmd, num_words);
 	if (unlikely(rc)) {
 		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
@@ -593,7 +641,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t
 	cmd[num_words++] = length;
 	cmd[num_words++] = lptr;
 
-	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
+	rc = __dpi_queue_write(dpivf, cmd, num_words);
 	if (unlikely(rc)) {
 		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
@@ -656,7 +704,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge
 		lptr++;
 	}
 
-	rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words);
+	rc = __dpi_queue_write(dpivf, cmd, num_words);
 	if (unlikely(rc)) {
 		STRM_DEC(dpi_conf->c_desc, tail);
 		return rc;
diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h
index 254e7fea20..65f12d844d 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.h
+++ b/drivers/dma/cnxk/cnxk_dmadev.h
@@ -12,12 +12,15 @@
 #define DPI_MAX_DESC	     2048
 #define DPI_MIN_DESC	     2
 #define MAX_VCHANS_PER_QUEUE 4
+#define DPI_CMD_QUEUE_BUF_SIZE 4096
+#define DPI_CMD_QUEUE_BUFS     1024
 
 /* Set Completion data to 0xFF when request submitted,
  * upon successful request completion engine reset to completion status
  */
 #define DPI_REQ_CDATA 0xFF
 
+#define CNXK_DMA_POOL_MAX_CACHE_SZ (16)
 #define CNXK_DPI_DEV_CONFIG (1ULL << 0)
 #define CNXK_DPI_DEV_START  (1ULL << 1)
 
@@ -45,8 +48,13 @@ struct cnxk_dpi_conf {
 };
 
 struct cnxk_dpi_vf_s {
-	struct roc_dpi rdpi;
+	uint64_t *chunk_base;
+	uint16_t chunk_head;
+	uint16_t chunk_size_m1;
+	struct rte_mempool *chunk_pool;
 	struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE];
+	struct roc_dpi rdpi;
+	uint32_t aura;
 	uint16_t num_vchans;
 	uint16_t flag;
 } __plt_cache_aligned;
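The sizing in this patch can be checked with a little arithmetic (a standalone sketch; `lcore_count` stands in for `rte_lcore_count()`): the pool is oversized by one cache's worth of chunks per lcore, because objects parked in a per-lcore mempool cache are not available to other lcores, and each 4 KB chunk carries 512 64-bit command words with headroom kept for the next-chunk pointer.

```c
#include <stdint.h>

#define DPI_CMD_QUEUE_BUF_SIZE     4096
#define DPI_CMD_QUEUE_BUFS         1024
#define CNXK_DMA_POOL_MAX_CACHE_SZ 16

/* Total chunks to populate: base queue depth plus per-lcore cache reserve,
 * mirroring nb_chunks in cnxk_dmadev_chunk_pool_create(). */
static uint64_t dma_pool_nb_chunks(unsigned int lcore_count)
{
	return DPI_CMD_QUEUE_BUFS +
	       (uint64_t)CNXK_DMA_POOL_MAX_CACHE_SZ * lcore_count;
}

/* Usable command words per chunk: 4096 B / 8 B = 512 words, minus the
 * headroom reserved at the tail (as in chunk_size_m1). */
static uint16_t dma_chunk_words(void)
{
	return (DPI_CMD_QUEUE_BUF_SIZE >> 3) - 2;
}
```

With 8 lcores, for example, the pool is populated with 1024 + 16 * 8 = 1152 chunks.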