From patchwork Mon Aug 21 17:49:35 2023
X-Patchwork-Id: 130613
From: Amit Prakash Shukla
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: Amit Prakash Shukla, Radha Mohan Chintakuntla
Subject: [PATCH v4 1/8] common/cnxk: use unique name for DPI memzone
Date: Mon, 21 Aug 2023 23:19:35 +0530
Message-ID: <20230821174942.3165191-1-amitprakashs@marvell.com>
In-Reply-To: <20230818090159.2597468-1-amitprakashs@marvell.com>
References: <20230818090159.2597468-1-amitprakashs@marvell.com>

roc_dpi was using vfid as part of name for memzone allocation. This led to memzone allocation failure in case of multiple physical functions. vfid is not unique by itself since multiple physical functions can have the same virtual function indices. So use complete DBDF as part of memzone name to make it unique.
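For illustration only (not part of the patch itself), a minimal standalone sketch of the naming problem and the fix: a memzone name built from the VF index alone repeats across physical functions, while a name that also carries the PCI domain:bus:device.function (DBDF) is unique per device. The struct below only mirrors the rte_pci_addr fields used by the diff; helper name, buffer sizes and sample addresses are assumptions.

#include <stdio.h>
#include <stdint.h>

/* Mirrors the rte_pci_addr fields used by the fix; illustrative only. */
struct dbdf {
	uint32_t domain;
	uint8_t bus;
	uint8_t devid;
	uint8_t function;
};

/* Old scheme: the name depends only on the VF index, so a VF with index 0
 * under PF0 and a VF with index 0 under PF1 both ask for "dpimem0" and the
 * second reservation fails. New scheme: fold the full DBDF into the name so
 * it is unique per PCI device.
 */
static void
dpi_memzone_name(char *name, size_t len, const struct dbdf *a)
{
	snprintf(name, len, "dpimem%d:%d:%d:%d", a->domain, a->bus, a->devid, a->function);
}

int
main(void)
{
	struct dbdf vf_on_pf0 = { 0, 2, 0, 1 };	/* 0000:02:00.1 */
	struct dbdf vf_on_pf1 = { 0, 3, 0, 1 };	/* 0000:03:00.1 */
	char n0[32], n1[32];

	dpi_memzone_name(n0, sizeof(n0), &vf_on_pf0);
	dpi_memzone_name(n1, sizeof(n1), &vf_on_pf1);
	printf("%s vs %s\n", n0, n1);		/* distinct names, no collision */
	return 0;
}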
Fixes: b6e395692b6d ("common/cnxk: add DPI DMA support")
Cc: stable@dpdk.org

Signed-off-by: Radha Mohan Chintakuntla
Signed-off-by: Amit Prakash Shukla
---
v2:
- Fix for bugs observed in v1.
- Squashed few commits.

v3:
- Resolved review suggestions.
- Code improvement.

v4:
- Resolved checkpatch warnings.

 drivers/common/cnxk/roc_dpi.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/common/cnxk/roc_dpi.c b/drivers/common/cnxk/roc_dpi.c
index 93c8318a3d..0e2f803077 100644
--- a/drivers/common/cnxk/roc_dpi.c
+++ b/drivers/common/cnxk/roc_dpi.c
@@ -81,10 +81,10 @@ roc_dpi_configure(struct roc_dpi *roc_dpi)
 		return rc;
 	}
 
-	snprintf(name, sizeof(name), "dpimem%d", roc_dpi->vfid);
+	snprintf(name, sizeof(name), "dpimem%d:%d:%d:%d", pci_dev->addr.domain, pci_dev->addr.bus,
+		 pci_dev->addr.devid, pci_dev->addr.function);
 	buflen = DPI_CMD_QUEUE_SIZE * DPI_CMD_QUEUE_BUFS;
-	dpi_mz = plt_memzone_reserve_aligned(name, buflen, 0,
-					     DPI_CMD_QUEUE_SIZE);
+	dpi_mz = plt_memzone_reserve_aligned(name, buflen, 0, DPI_CMD_QUEUE_SIZE);
 	if (dpi_mz == NULL) {
 		plt_err("dpi memzone reserve failed");
 		rc = -ENOMEM;

From patchwork Mon Aug 21 17:49:36 2023
Received: from localhost.localdomain (unknown
[10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 178E73F7091; Mon, 21 Aug 2023 10:49:51 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Subject: [PATCH v4 2/8] dma/cnxk: changes for dmadev driver Date: Mon, 21 Aug 2023 23:19:36 +0530 Message-ID: <20230821174942.3165191-2-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com> References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: kx6QNJoMqQ7-CK2FOfypSUE63ucF1_87 X-Proofpoint-GUID: kx6QNJoMqQ7-CK2FOfypSUE63ucF1_87 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-21_06,2023-08-18_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Dmadev driver changes to align with dpdk spec. Fixes: 681851b347ad ("dma/cnxk: support CN10K DMA engine") Cc: stable@dpdk.org Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. drivers/dma/cnxk/cnxk_dmadev.c | 464 ++++++++++++++++++++------------- drivers/dma/cnxk/cnxk_dmadev.h | 24 +- 2 files changed, 294 insertions(+), 194 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index a6f4a31e0e..a0152fc6df 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -7,68 +7,76 @@ #include #include +#include +#include #include #include #include #include -#include -#include -#include #include static int -cnxk_dmadev_info_get(const struct rte_dma_dev *dev, - struct rte_dma_info *dev_info, uint32_t size) +cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size) { RTE_SET_USED(dev); RTE_SET_USED(size); dev_info->max_vchans = 1; dev_info->nb_vchans = 1; - dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | - RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | - RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | - RTE_DMA_CAPA_OPS_COPY_SG; + dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | + RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | + RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; dev_info->max_desc = DPI_MAX_DESC; - dev_info->min_desc = 1; + dev_info->min_desc = 2; dev_info->max_sges = DPI_MAX_POINTER; return 0; } static int -cnxk_dmadev_configure(struct rte_dma_dev *dev, - const struct rte_dma_conf *conf, uint32_t conf_sz) +cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = NULL; int rc = 0; RTE_SET_USED(conf); - RTE_SET_USED(conf); - RTE_SET_USED(conf_sz); RTE_SET_USED(conf_sz); + dpivf = dev->fp_obj->dev_private; + + if (dpivf->flag & CNXK_DPI_DEV_CONFIG) + return rc; + rc = roc_dpi_configure(&dpivf->rdpi); - if (rc < 0) + if (rc < 0) { plt_err("DMA configure failed err = %d", rc); + goto done; + } + dpivf->flag |= CNXK_DPI_DEV_CONFIG; + +done: return rc; } static int cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, - const struct rte_dma_vchan_conf *conf, - uint32_t conf_sz) + const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = 
dev->fp_obj->dev_private; - struct cnxk_dpi_compl_s *comp_data; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; + uint16_t max_desc; + uint32_t size; int i; RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); + if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + return 0; + header->cn9k.pt = DPI_HDR_PT_ZBW_CA; switch (conf->direction) { @@ -96,35 +104,54 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.fport = conf->dst_port.pcie.coreid; }; - for (i = 0; i < conf->nb_desc; i++) { - comp_data = rte_zmalloc(NULL, sizeof(*comp_data), 0); - if (comp_data == NULL) { - plt_err("Failed to allocate for comp_data"); - return -ENOMEM; - } - comp_data->cdata = DPI_REQ_CDATA; - dpivf->conf.c_desc.compl_ptr[i] = comp_data; - }; - dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; - dpivf->conf.c_desc.head = 0; - dpivf->conf.c_desc.tail = 0; + max_desc = conf->nb_desc; + if (!rte_is_power_of_2(max_desc)) + max_desc = rte_align32pow2(max_desc); + + if (max_desc > DPI_MAX_DESC) + max_desc = DPI_MAX_DESC; + + size = (max_desc * sizeof(struct cnxk_dpi_compl_s *)); + dpi_conf->c_desc.compl_ptr = rte_zmalloc(NULL, size, 0); + + if (dpi_conf->c_desc.compl_ptr == NULL) { + plt_err("Failed to allocate for comp_data"); + return -ENOMEM; + } + + for (i = 0; i < max_desc; i++) { + dpi_conf->c_desc.compl_ptr[i] = + rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; + } + + dpi_conf->c_desc.max_cnt = (max_desc - 1); + dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; + dpivf->pnum_words = 0; + dpivf->pending = 0; + dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } static int cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, - const struct rte_dma_vchan_conf *conf, - uint32_t conf_sz) + const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_compl_s *comp_data; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; + uint16_t max_desc; + uint32_t size; int i; RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); + if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + return 0; + header->cn10k.pt = DPI_HDR_PT_ZBW_CA; switch (conf->direction) { @@ -152,18 +179,33 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.fport = conf->dst_port.pcie.coreid; }; - for (i = 0; i < conf->nb_desc; i++) { - comp_data = rte_zmalloc(NULL, sizeof(*comp_data), 0); - if (comp_data == NULL) { - plt_err("Failed to allocate for comp_data"); - return -ENOMEM; - } - comp_data->cdata = DPI_REQ_CDATA; - dpivf->conf.c_desc.compl_ptr[i] = comp_data; - }; - dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; - dpivf->conf.c_desc.head = 0; - dpivf->conf.c_desc.tail = 0; + max_desc = conf->nb_desc; + if (!rte_is_power_of_2(max_desc)) + max_desc = rte_align32pow2(max_desc); + + if (max_desc > DPI_MAX_DESC) + max_desc = DPI_MAX_DESC; + + size = (max_desc * sizeof(struct cnxk_dpi_compl_s *)); + dpi_conf->c_desc.compl_ptr = rte_zmalloc(NULL, size, 0); + + if (dpi_conf->c_desc.compl_ptr == NULL) { + plt_err("Failed to allocate for comp_data"); + return -ENOMEM; + } + + for (i = 0; i < max_desc; i++) { + dpi_conf->c_desc.compl_ptr[i] = + rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; + } + + dpi_conf->c_desc.max_cnt = (max_desc - 1); 
+ dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; + dpivf->pnum_words = 0; + dpivf->pending = 0; + dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -173,10 +215,16 @@ cnxk_dmadev_start(struct rte_dma_dev *dev) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + if (dpivf->flag & CNXK_DPI_DEV_START) + return 0; + dpivf->desc_idx = 0; - dpivf->num_words = 0; + dpivf->pending = 0; + dpivf->pnum_words = 0; roc_dpi_enable(&dpivf->rdpi); + dpivf->flag |= CNXK_DPI_DEV_START; + return 0; } @@ -187,6 +235,8 @@ cnxk_dmadev_stop(struct rte_dma_dev *dev) roc_dpi_disable(&dpivf->rdpi); + dpivf->flag &= ~CNXK_DPI_DEV_START; + return 0; } @@ -198,6 +248,8 @@ cnxk_dmadev_close(struct rte_dma_dev *dev) roc_dpi_disable(&dpivf->rdpi); roc_dpi_dev_fini(&dpivf->rdpi); + dpivf->flag = 0; + return 0; } @@ -206,8 +258,7 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) { uint64_t *ptr = dpi->chunk_base; - if ((cmd_count < DPI_MIN_CMD_SIZE) || (cmd_count > DPI_MAX_CMD_SIZE) || - cmds == NULL) + if ((cmd_count < DPI_MIN_CMD_SIZE) || (cmd_count > DPI_MAX_CMD_SIZE) || cmds == NULL) return -EINVAL; /* @@ -223,11 +274,15 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) int count; uint64_t *new_buff = dpi->chunk_next; - dpi->chunk_next = - (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); + dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); if (!dpi->chunk_next) { - plt_err("Failed to alloc next buffer from NPA"); - return -ENOMEM; + plt_dp_dbg("Failed to alloc next buffer from NPA"); + + /* NPA failed to allocate a buffer. Restoring chunk_next + * to its original address. + */ + dpi->chunk_next = new_buff; + return -ENOSPC; } /* @@ -261,13 +316,17 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) /* queue index may be greater than pool size */ if (dpi->chunk_head >= dpi->pool_size_m1) { new_buff = dpi->chunk_next; - dpi->chunk_next = - (void *)roc_npa_aura_op_alloc(dpi->aura_handle, - 0); + dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); if (!dpi->chunk_next) { - plt_err("Failed to alloc next buffer from NPA"); - return -ENOMEM; + plt_dp_dbg("Failed to alloc next buffer from NPA"); + + /* NPA failed to allocate a buffer. Restoring chunk_next + * to its original address. 
+ */ + dpi->chunk_next = new_buff; + return -ENOSPC; } + /* Write next buffer address */ *ptr = (uint64_t)new_buff; dpi->chunk_base = new_buff; @@ -279,12 +338,13 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) } static int -cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, - rte_iova_t dst, uint32_t length, uint64_t flags) +cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length, + uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; @@ -292,9 +352,8 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -311,103 +370,110 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, lptr = dst; } - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; /* word3 is always 0 */ num_words += 4; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = fptr; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = lptr; - - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; - } - dpivf->num_words += num_words; + cmd[num_words++] = length; + cmd[num_words++] = fptr; + cmd[num_words++] = length; + cmd[num_words++] = lptr; + + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; } - return dpivf->desc_idx++; + rte_wmb(); + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted++; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; + } + + return (dpivf->desc_idx++); } static int -cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, - const struct rte_dma_sge *src, - const struct rte_dma_sge *dst, - uint16_t nb_src, uint16_t nb_dst, uint64_t flags) +cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); /* * For inbound case, src pointers are last pointers. * For all other cases, src pointers are first pointers. 
*/ if (header->cn9k.xtype == DPI_XTYPE_INBOUND) { - header->cn9k.nfst = nb_dst & 0xf; - header->cn9k.nlst = nb_src & 0xf; + header->cn9k.nfst = nb_dst & DPI_MAX_POINTER; + header->cn9k.nlst = nb_src & DPI_MAX_POINTER; fptr = &dst[0]; lptr = &src[0]; } else { - header->cn9k.nfst = nb_src & 0xf; - header->cn9k.nlst = nb_dst & 0xf; + header->cn9k.nfst = nb_src & DPI_MAX_POINTER; + header->cn9k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; lptr = &dst[0]; } - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; num_words += 4; for (i = 0; i < header->cn9k.nfst; i++) { - dpivf->cmd[num_words++] = (uint64_t)fptr->length; - dpivf->cmd[num_words++] = fptr->addr; + cmd[num_words++] = (uint64_t)fptr->length; + cmd[num_words++] = fptr->addr; fptr++; } for (i = 0; i < header->cn9k.nlst; i++) { - dpivf->cmd[num_words++] = (uint64_t)lptr->length; - dpivf->cmd[num_words++] = lptr->addr; + cmd[num_words++] = (uint64_t)lptr->length; + cmd[num_words++] = lptr->addr; lptr++; } - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; - } - dpivf->num_words += num_words; + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; } - return (rc < 0) ? rc : dpivf->desc_idx++; + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted += nb_src; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; + } + + return (dpivf->desc_idx++); } static int -cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, - rte_iova_t dst, uint32_t length, uint64_t flags) +cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; @@ -415,9 +481,8 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -425,131 +490,140 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, fptr = src; lptr = dst; - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; /* word3 is always 0 */ num_words += 4; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = fptr; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = lptr; - - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; - } - dpivf->num_words += num_words; + cmd[num_words++] = length; + cmd[num_words++] = fptr; + cmd[num_words++] = length; + cmd[num_words++] = lptr; + + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); + if (unlikely(rc)) { + 
STRM_DEC(dpivf->conf.c_desc, tail); + return rc; + } + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted++; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; } return dpivf->desc_idx++; } static int -cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, - const struct rte_dma_sge *src, - const struct rte_dma_sge *dst, uint16_t nb_src, - uint16_t nb_dst, uint64_t flags) +cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, + uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); - header->cn10k.nfst = nb_src & 0xf; - header->cn10k.nlst = nb_dst & 0xf; + header->cn10k.nfst = nb_src & DPI_MAX_POINTER; + header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; lptr = &dst[0]; - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; num_words += 4; for (i = 0; i < header->cn10k.nfst; i++) { - dpivf->cmd[num_words++] = (uint64_t)fptr->length; - dpivf->cmd[num_words++] = fptr->addr; + cmd[num_words++] = (uint64_t)fptr->length; + cmd[num_words++] = fptr->addr; fptr++; } for (i = 0; i < header->cn10k.nlst; i++) { - dpivf->cmd[num_words++] = (uint64_t)lptr->length; - dpivf->cmd[num_words++] = lptr->addr; + cmd[num_words++] = (uint64_t)lptr->length; + cmd[num_words++] = lptr->addr; lptr++; } - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; - } - dpivf->num_words += num_words; + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; + } + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted += nb_src; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; } - return (rc < 0) ? 
rc : dpivf->desc_idx++; + return (dpivf->desc_idx++); } static uint16_t -cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, - uint16_t *last_idx, bool *has_error) +cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx, + bool *has_error) { struct cnxk_dpi_vf_s *dpivf = dev_private; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_compl_s *comp_ptr; int cnt; RTE_SET_USED(vchan); - if (dpivf->stats.submitted == dpivf->stats.completed) - return 0; - for (cnt = 0; cnt < nb_cpls; cnt++) { - struct cnxk_dpi_compl_s *comp_ptr = - dpivf->conf.c_desc.compl_ptr[cnt]; + comp_ptr = c_desc->compl_ptr[c_desc->head]; if (comp_ptr->cdata) { if (comp_ptr->cdata == DPI_REQ_CDATA) break; *has_error = 1; dpivf->stats.errors++; + STRM_INC(*c_desc, head); break; } + + comp_ptr->cdata = DPI_REQ_CDATA; + STRM_INC(*c_desc, head); } - *last_idx = cnt - 1; - dpivf->conf.c_desc.tail = cnt; dpivf->stats.completed += cnt; + *last_idx = dpivf->stats.completed - 1; return cnt; } static uint16_t -cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, - const uint16_t nb_cpls, uint16_t *last_idx, - enum rte_dma_status_code *status) +cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, enum rte_dma_status_code *status) { struct cnxk_dpi_vf_s *dpivf = dev_private; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_compl_s *comp_ptr; int cnt; RTE_SET_USED(vchan); RTE_SET_USED(last_idx); + for (cnt = 0; cnt < nb_cpls; cnt++) { - struct cnxk_dpi_compl_s *comp_ptr = - dpivf->conf.c_desc.compl_ptr[cnt]; + comp_ptr = c_desc->compl_ptr[c_desc->head]; status[cnt] = comp_ptr->cdata; if (status[cnt]) { if (status[cnt] == DPI_REQ_CDATA) @@ -557,30 +631,52 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, dpivf->stats.errors++; } + comp_ptr->cdata = DPI_REQ_CDATA; + STRM_INC(*c_desc, head); } - *last_idx = cnt - 1; - dpivf->conf.c_desc.tail = 0; dpivf->stats.completed += cnt; + *last_idx = dpivf->stats.completed - 1; return cnt; } +static uint16_t +cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan) +{ + const struct cnxk_dpi_vf_s *dpivf = (const struct cnxk_dpi_vf_s *)dev_private; + uint16_t burst_cap; + + RTE_SET_USED(vchan); + + burst_cap = dpivf->conf.c_desc.max_cnt - + ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; + + return burst_cap; +} + static int cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused) { struct cnxk_dpi_vf_s *dpivf = dev_private; + uint32_t num_words = dpivf->pnum_words; + + if (!dpivf->pnum_words) + return 0; rte_wmb(); - plt_write64(dpivf->num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + + dpivf->stats.submitted += dpivf->pending; + dpivf->pnum_words = 0; + dpivf->pending = 0; return 0; } static int -cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, - struct rte_dma_stats *rte_stats, uint32_t size) +cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats, + uint32_t size) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; struct rte_dma_stats *stats = &dpivf->stats; @@ -628,8 +724,7 @@ static const struct rte_dma_dev_ops cnxk_dmadev_ops = { }; static int -cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) +cnxk_dmadev_probe(struct rte_pci_driver 
*pci_drv __rte_unused, struct rte_pci_device *pci_dev) { struct cnxk_dpi_vf_s *dpivf = NULL; char name[RTE_DEV_NAME_MAX_LEN]; @@ -648,8 +743,7 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, memset(name, 0, sizeof(name)); rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); - dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, - sizeof(*dpivf)); + dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*dpivf)); if (dmadev == NULL) { plt_err("dma device allocation failed for %s", name); return -ENOMEM; @@ -666,6 +760,7 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, dmadev->fp_obj->submit = cnxk_dmadev_submit; dmadev->fp_obj->completed = cnxk_dmadev_completed; dmadev->fp_obj->completed_status = cnxk_dmadev_completed_status; + dmadev->fp_obj->burst_capacity = cnxk_damdev_burst_capacity; if (pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KA || pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KA || @@ -682,6 +777,8 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, if (rc < 0) goto err_out_free; + dmadev->state = RTE_DMA_DEV_READY; + return 0; err_out_free: @@ -703,20 +800,17 @@ cnxk_dmadev_remove(struct rte_pci_device *pci_dev) } static const struct rte_pci_id cnxk_dma_pci_map[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, - PCI_DEVID_CNXK_DPI_VF) - }, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_DPI_VF)}, { .vendor_id = 0, }, }; static struct rte_pci_driver cnxk_dmadev = { - .id_table = cnxk_dma_pci_map, + .id_table = cnxk_dma_pci_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA, - .probe = cnxk_dmadev_probe, - .remove = cnxk_dmadev_remove, + .probe = cnxk_dmadev_probe, + .remove = cnxk_dmadev_remove, }; RTE_PMD_REGISTER_PCI(cnxk_dmadev_pci_driver, cnxk_dmadev); diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index e1f5694f50..9563295af0 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -4,16 +4,21 @@ #ifndef CNXK_DMADEV_H #define CNXK_DMADEV_H -#define DPI_MAX_POINTER 15 -#define DPI_QUEUE_STOP 0x0 -#define DPI_QUEUE_START 0x1 -#define STRM_INC(s) ((s).tail = ((s).tail + 1) % (s).max_cnt) -#define DPI_MAX_DESC 1024 +#include + +#define DPI_MAX_POINTER 15 +#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) +#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? 
(s).max_cnt : ((s).var - 1)) +#define DPI_MAX_DESC 1024 /* Set Completion data to 0xFF when request submitted, * upon successful request completion engine reset to completion status */ -#define DPI_REQ_CDATA 0xFF +#define DPI_REQ_CDATA 0xFF + +#define CNXK_DPI_DEV_CONFIG (1ULL << 0) +#define CNXK_DPI_VCHAN_CONFIG (1ULL << 1) +#define CNXK_DPI_DEV_START (1ULL << 2) struct cnxk_dpi_compl_s { uint64_t cdata; @@ -21,7 +26,7 @@ struct cnxk_dpi_compl_s { }; struct cnxk_dpi_cdesc_data_s { - struct cnxk_dpi_compl_s *compl_ptr[DPI_MAX_DESC]; + struct cnxk_dpi_compl_s **compl_ptr; uint16_t max_cnt; uint16_t head; uint16_t tail; @@ -36,9 +41,10 @@ struct cnxk_dpi_vf_s { struct roc_dpi rdpi; struct cnxk_dpi_conf conf; struct rte_dma_stats stats; - uint64_t cmd[DPI_MAX_CMD_SIZE]; - uint32_t num_words; + uint16_t pending; + uint16_t pnum_words; uint16_t desc_idx; + uint16_t flag; }; #endif From patchwork Mon Aug 21 17:49:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130615 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6654D430C3; Mon, 21 Aug 2023 19:50:10 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2BCA34325B; Mon, 21 Aug 2023 19:50:02 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 5D0FE40DF5 for ; Mon, 21 Aug 2023 19:50:00 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37LCn1Rg007858 for ; Mon, 21 Aug 2023 10:49:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=PfYUSwSFi2Yo++fE8bVuhTxVu3QGPkfHLE/XW7zcVjk=; b=WSVZkLlQXWTbekc/1fvkqOhHw0Wu29xdGZtJ6mHQugkAd7d0fbACt4KhrdXJpFIGTtro JJ9uS/bcdFRUkd7WONtrmCoWZRchGkeBAm0k6a3elmD2FxLD34s+P0HbLPzhwlBlkc/y Y077svrkEUnswG5034bcvM82idlYIEaP4LTvzqZAoPLpVeT76uSLQuuQjTRWAP8UNovz 3g1mcjxGScyJM0mfNTh0r4gpOHfNH0hJ4Atu5FbgZt+3UEwDciRV6iavcaWnPqTqmy7K 841IoegsMPD/pU1dgPe4il6RZgrJ+xPUI00haC9Z8x5WwnkKzlof9XWQnCtN76V6Xz3i Qw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sjw8jdv7m-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 21 Aug 2023 10:49:59 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Mon, 21 Aug 2023 10:49:57 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Mon, 21 Aug 2023 10:49:57 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id AC6B23F7081; Mon, 21 Aug 2023 10:49:55 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Radha Mohan Chintakuntla Subject: [PATCH v4 3/8] dma/cnxk: add DMA devops for all models of cn10xxx Date: Mon, 21 Aug 2023 23:19:37 +0530 Message-ID: <20230821174942.3165191-3-amitprakashs@marvell.com> X-Mailer: 
git-send-email 2.25.1
In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com>
References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com>

Valid function pointers are set for DMA device operations i.e. cn10k_dmadev_ops are used for all cn10k devices.

Signed-off-by: Radha Mohan Chintakuntla
Signed-off-by: Amit Prakash Shukla
---
v2:
- Fix for bugs observed in v1.
- Squashed few commits.

v3:
- Resolved review suggestions.
- Code improvement.

v4:
- Resolved checkpatch warnings.

 drivers/dma/cnxk/cnxk_dmadev.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c
index a0152fc6df..1dc124e68f 100644
--- a/drivers/dma/cnxk/cnxk_dmadev.c
+++ b/drivers/dma/cnxk/cnxk_dmadev.c
@@ -763,7 +763,9 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_de
 	dmadev->fp_obj->burst_capacity = cnxk_damdev_burst_capacity;
 
 	if (pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KA ||
+	    pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KAS ||
 	    pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KA ||
+	    pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KB ||
 	    pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KB) {
 		dmadev->dev_ops = &cn10k_dmadev_ops;
 		dmadev->fp_obj->copy = cn10k_dmadev_copy;
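For reference, a note on the completion-ring indexing that patch 2/8 of this series introduced and that the later patches build on: the requested descriptor count is rounded up to a power of two, so the head/tail indices wrap with a simple mask (max_cnt = size - 1), which is what the STRM_INC()/STRM_DEC() macros in cnxk_dmadev.h rely on. The sketch below is standalone and only mimics that scheme; align32pow2() stands in for DPDK's rte_align32pow2(), and the numbers are assumptions.

#include <stdint.h>
#include <stdio.h>

#define DPI_MAX_DESC 1024

/* Round up to the next power of two (same effect as rte_align32pow2()). */
static uint32_t
align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
	x |= x >> 8;  x |= x >> 16;
	return x + 1;
}

struct ring_idx {
	uint16_t max_cnt;	/* size - 1, usable as a wrap mask */
	uint16_t head;
	uint16_t tail;
};

/* Mirror of STRM_INC()/STRM_DEC(): advance or roll back an index with the mask. */
static inline void ring_inc(struct ring_idx *r, uint16_t *var) { *var = (*var + 1) & r->max_cnt; }
static inline void ring_dec(struct ring_idx *r, uint16_t *var) { *var = (*var == 0) ? r->max_cnt : *var - 1; }

int
main(void)
{
	struct ring_idx r = { 0, 0, 0 };
	uint32_t nb_desc = 1000;		/* application request */
	uint32_t size = align32pow2(nb_desc);	/* -> 1024 */

	if (size > DPI_MAX_DESC)
		size = DPI_MAX_DESC;
	r.max_cnt = (uint16_t)(size - 1);	/* 0x3ff mask */

	ring_inc(&r, &r.tail);			/* reserve a completion slot for a submission */
	ring_dec(&r, &r.tail);			/* roll back if the queue write fails */
	printf("size=%u mask=0x%x tail=%u\n", size, r.max_cnt, r.tail);
	return 0;
}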
From patchwork Mon Aug 21 17:49:38 2023
X-Patchwork-Id: 130616
From: Amit Prakash Shukla
To: Vamsi Attunuru
CC: Amit Prakash Shukla, Radha Mohan Chintakuntla
Subject: [PATCH v4 4/8] dma/cnxk: update func field based on transfer type
Date: Mon, 21 Aug 2023 23:19:38 +0530
Message-ID: <20230821174942.3165191-4-amitprakashs@marvell.com>
In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com>
References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com>

Use pfid and vfid of src_port for incoming DMA transfers and dst_port for outgoing DMA transfers.

Signed-off-by: Radha Mohan Chintakuntla
Signed-off-by: Amit Prakash Shukla
---
v2:
- Fix for bugs observed in v1.
- Squashed few commits.

v3:
- Resolved review suggestions.
- Code improvement.

v4:
- Resolved checkpatch warnings.
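For reference, a hedged sketch of the func encoding the diff below applies: when the vchan's PCIe port has vfen set, the header's 16-bit func field carries the physical function id in the upper bits (pfid << 12) and the virtual function id in the low bits; DEV_TO_MEM transfers take the ids from src_port, MEM_TO_DEV transfers from dst_port, and the device-to-device case clears pvfe. The helper below is illustrative only, not driver API.

#include <stdint.h>

/* Illustrative only: pack a DPI header "func" field the way the patch below
 * does for the CN9K/CN10K headers when per-VF addressing (pvfe) is enabled.
 * pfid occupies the bits above bit 12, vfid the low bits.
 */
static inline uint16_t
dpi_hdr_func(uint16_t pfid, uint16_t vfid)
{
	return (uint16_t)((pfid << 12) | vfid);
}

/* Direction decides which port supplies the ids:
 *   RTE_DMA_DIR_DEV_TO_MEM -> conf->src_port.pcie.{pfid, vfid}
 *   RTE_DMA_DIR_MEM_TO_DEV -> conf->dst_port.pcie.{pfid, vfid}
 * The DEV_TO_DEV path in the diff sets pvfe = 0, so func is not used there.
 */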
drivers/dma/cnxk/cnxk_dmadev.c | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 1dc124e68f..d8cfb98cd7 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -84,13 +84,21 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.xtype = DPI_XTYPE_INBOUND; header->cn9k.lport = conf->src_port.pcie.coreid; header->cn9k.fport = 0; - header->cn9k.pvfe = 1; + header->cn9k.pvfe = conf->src_port.pcie.vfen; + if (header->cn9k.pvfe) { + header->cn9k.func = conf->src_port.pcie.pfid << 12; + header->cn9k.func |= conf->src_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_DEV: header->cn9k.xtype = DPI_XTYPE_OUTBOUND; header->cn9k.lport = 0; header->cn9k.fport = conf->dst_port.pcie.coreid; - header->cn9k.pvfe = 1; + header->cn9k.pvfe = conf->dst_port.pcie.vfen; + if (header->cn9k.pvfe) { + header->cn9k.func = conf->dst_port.pcie.pfid << 12; + header->cn9k.func |= conf->dst_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_MEM: header->cn9k.xtype = DPI_XTYPE_INTERNAL_ONLY; @@ -102,6 +110,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.xtype = DPI_XTYPE_EXTERNAL_ONLY; header->cn9k.lport = conf->src_port.pcie.coreid; header->cn9k.fport = conf->dst_port.pcie.coreid; + header->cn9k.pvfe = 0; }; max_desc = conf->nb_desc; @@ -159,13 +168,21 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.xtype = DPI_XTYPE_INBOUND; header->cn10k.lport = conf->src_port.pcie.coreid; header->cn10k.fport = 0; - header->cn10k.pvfe = 1; + header->cn10k.pvfe = conf->src_port.pcie.vfen; + if (header->cn10k.pvfe) { + header->cn10k.func = conf->src_port.pcie.pfid << 12; + header->cn10k.func |= conf->src_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_DEV: header->cn10k.xtype = DPI_XTYPE_OUTBOUND; header->cn10k.lport = 0; header->cn10k.fport = conf->dst_port.pcie.coreid; - header->cn10k.pvfe = 1; + header->cn10k.pvfe = conf->dst_port.pcie.vfen; + if (header->cn10k.pvfe) { + header->cn10k.func = conf->dst_port.pcie.pfid << 12; + header->cn10k.func |= conf->dst_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_MEM: header->cn10k.xtype = DPI_XTYPE_INTERNAL_ONLY; @@ -177,6 +194,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.xtype = DPI_XTYPE_EXTERNAL_ONLY; header->cn10k.lport = conf->src_port.pcie.coreid; header->cn10k.fport = conf->dst_port.pcie.coreid; + header->cn10k.pvfe = 0; }; max_desc = conf->nb_desc; From patchwork Mon Aug 21 17:49:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130617 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B86DF430C3; Mon, 21 Aug 2023 19:50:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 872DE43260; Mon, 21 Aug 2023 19:50:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id A89C940DF5 for ; Mon, 21 Aug 2023 19:50:06 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37LDAPZl006156 for ; Mon, 21 
Aug 2023 10:50:06 -0700
From: Amit Prakash Shukla
To: Vamsi Attunuru
CC: Amit Prakash Shukla, Radha Mohan Chintakuntla
Subject: [PATCH v4 5/8] dma/cnxk: increase vchan per queue to max 4
Date: Mon, 21 Aug 2023 23:19:39 +0530
Message-ID: <20230821174942.3165191-5-amitprakashs@marvell.com>
In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com>
References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com>

To support multiple directions in same queue make use of multiple vchan per queue. Each vchan can be configured in some direction and used.

Signed-off-by: Amit Prakash Shukla
Signed-off-by: Radha Mohan Chintakuntla
---
v2:
- Fix for bugs observed in v1.
- Squashed few commits.

v3:
- Resolved review suggestions.
- Code improvement.

v4:
- Resolved checkpatch warnings.
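As context, a sketch of how an application could drive one cnxk DMA queue with several vchans once this change is in, each vchan set up for a different direction, using the public rte_dma API. The device id, descriptor count and trimmed error handling below are assumptions, not taken from the patch.

#include <rte_dmadev.h>

/* Configure one cnxk dmadev (queue) with two vchans in different
 * directions -- possible once the driver exposes up to 4 vchans per queue.
 */
static int
setup_two_vchans(int16_t dev_id)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = 2 };
	struct rte_dma_vchan_conf m2m = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024,
	};
	struct rte_dma_vchan_conf m2d = {
		.direction = RTE_DMA_DIR_MEM_TO_DEV,
		.nb_desc = 1024,
		/* .dst_port.pcie.* would describe the target port/coreid here */
	};

	if (rte_dma_configure(dev_id, &dev_conf) < 0)
		return -1;
	if (rte_dma_vchan_setup(dev_id, 0, &m2m) < 0)	/* vchan 0: mem-to-mem */
		return -1;
	if (rte_dma_vchan_setup(dev_id, 1, &m2d) < 0)	/* vchan 1: mem-to-dev */
		return -1;

	return rte_dma_start(dev_id);
}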
drivers/dma/cnxk/cnxk_dmadev.c | 68 +++++++++++++++------------------- drivers/dma/cnxk/cnxk_dmadev.h | 11 +++--- 2 files changed, 36 insertions(+), 43 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index d8cfb98cd7..7d83b70e8b 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -22,8 +22,8 @@ cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_inf RTE_SET_USED(dev); RTE_SET_USED(size); - dev_info->max_vchans = 1; - dev_info->nb_vchans = 1; + dev_info->max_vchans = MAX_VCHANS_PER_QUEUE; + dev_info->nb_vchans = MAX_VCHANS_PER_QUEUE; dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; @@ -65,13 +65,12 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; union dpi_instr_hdr_s *header = &dpi_conf->hdr; uint16_t max_desc; uint32_t size; int i; - RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) @@ -149,13 +148,12 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; union dpi_instr_hdr_s *header = &dpi_conf->hdr; uint16_t max_desc; uint32_t size; int i; - RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) @@ -360,18 +358,17 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -400,7 +397,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -421,18 +418,17 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + 
STRM_INC(dpi_conf->c_desc, tail); /* * For inbound case, src pointers are last pointers. @@ -468,7 +464,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -489,18 +485,17 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t uint32_t length, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -520,7 +515,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -542,18 +537,17 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn10k.nfst = nb_src & DPI_MAX_POINTER; header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; @@ -579,7 +573,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -600,12 +594,11 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, bool *has_error) { struct cnxk_dpi_vf_s *dpivf = dev_private; - struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc; struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(vchan); - for (cnt = 0; cnt < nb_cpls; cnt++) { comp_ptr = c_desc->compl_ptr[c_desc->head]; @@ -633,11 +626,11 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n uint16_t *last_idx, enum rte_dma_status_code *status) { struct cnxk_dpi_vf_s *dpivf = dev_private; - struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc; struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(vchan); RTE_SET_USED(last_idx); for (cnt = 0; cnt < nb_cpls; cnt++) { @@ -663,11 +656,10 @@ static uint16_t cnxk_damdev_burst_capacity(const void 
*dev_private, uint16_t vchan) { const struct cnxk_dpi_vf_s *dpivf = (const struct cnxk_dpi_vf_s *)dev_private; + const struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; uint16_t burst_cap; - RTE_SET_USED(vchan); - - burst_cap = dpivf->conf.c_desc.max_cnt - + burst_cap = dpi_conf->c_desc.max_cnt - ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; return burst_cap; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 9563295af0..4693960a19 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -6,10 +6,11 @@ #include -#define DPI_MAX_POINTER 15 -#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) -#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) -#define DPI_MAX_DESC 1024 +#define DPI_MAX_POINTER 15 +#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) +#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) +#define DPI_MAX_DESC 1024 +#define MAX_VCHANS_PER_QUEUE 4 /* Set Completion data to 0xFF when request submitted, * upon successful request completion engine reset to completion status @@ -39,7 +40,7 @@ struct cnxk_dpi_conf { struct cnxk_dpi_vf_s { struct roc_dpi rdpi; - struct cnxk_dpi_conf conf; + struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE]; struct rte_dma_stats stats; uint16_t pending; uint16_t pnum_words; From patchwork Mon Aug 21 17:49:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130618 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id AACD5430C3; Mon, 21 Aug 2023 19:50:28 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B18D34323A; Mon, 21 Aug 2023 19:50:12 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id CF2DC43268 for ; Mon, 21 Aug 2023 19:50:10 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37LDPf8m007595 for ; Mon, 21 Aug 2023 10:50:10 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Nzdq4C5T2G2sXazA1K2UQ7u1G4kSONIZAqx5VkzO5jY=; b=W2Dcx6Xu8zMMl57EH2APqbhhDyTACJumTwo+XFKZG6GUuJ8av35AY98kfY2EhJ+iOAQa S6s71gighruN3zkB/vZGQ14BCXxHqPkOY7HuvYP56OWJeO1L8E33xMkFfFNrGFjotk/q bq2khp1YdoAtrcuxaTypYLAJMGXrOo1lO2e+pI68VJSPpSN3lzUb9mUdWarNlMgqJ5zC IVLCPRYsY1pwNP+gI7Y3a+XxnOumlArZub42nP9bQYdM4jQ6kmgBGQmbpMgSegbzSjWI bm+ABMD9d1WqK61hjqt9okE1EZFhEjs4BSY3aRvKWhjqJIcgBbIt1QbOgWn9RMbr4dSU 5Q== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sju3qp3ar-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 21 Aug 2023 10:50:09 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Mon, 21 Aug 2023 10:50:08 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 
15.0.1497.48 via Frontend Transport; Mon, 21 Aug 2023 10:50:08 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 6FF705B692F; Mon, 21 Aug 2023 10:50:06 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla Subject: [PATCH v4 6/8] dma/cnxk: vchan support enhancement Date: Mon, 21 Aug 2023 23:19:40 +0530 Message-ID: <20230821174942.3165191-6-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com> References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 30XqSdoNz9JOEuHZX595XqiVr3O8YsBq X-Proofpoint-ORIG-GUID: 30XqSdoNz9JOEuHZX595XqiVr3O8YsBq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-21_06,2023-08-18_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Code changes to realign dpi private structure based on vchan. Changeset also resets DMA dev stats while starting dma device. Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. drivers/dma/cnxk/cnxk_dmadev.c | 210 ++++++++++++++++++++++++--------- drivers/dma/cnxk/cnxk_dmadev.h | 18 +-- 2 files changed, 165 insertions(+), 63 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 7d83b70e8b..9fb3bb264a 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -16,35 +16,79 @@ #include +static int cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan); + static int cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size) { - RTE_SET_USED(dev); + struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; RTE_SET_USED(size); dev_info->max_vchans = MAX_VCHANS_PER_QUEUE; - dev_info->nb_vchans = MAX_VCHANS_PER_QUEUE; + dev_info->nb_vchans = dpivf->num_vchans; dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; dev_info->max_desc = DPI_MAX_DESC; - dev_info->min_desc = 2; + dev_info->min_desc = DPI_MIN_DESC; dev_info->max_sges = DPI_MAX_POINTER; return 0; } +static int +cnxk_dmadev_vchan_free(struct cnxk_dpi_vf_s *dpivf, uint16_t vchan) +{ + struct cnxk_dpi_conf *dpi_conf; + uint16_t num_vchans; + uint16_t max_desc; + int i, j; + + if (vchan == RTE_DMA_ALL_VCHAN) { + num_vchans = dpivf->num_vchans; + i = 0; + } else { + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + num_vchans = vchan + 1; + i = vchan; + } + + for (; i < num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + max_desc = dpi_conf->c_desc.max_cnt; + if (dpi_conf->c_desc.compl_ptr) { + for (j = 0; j < max_desc; j++) + rte_free(dpi_conf->c_desc.compl_ptr[j]); + } + + rte_free(dpi_conf->c_desc.compl_ptr); + dpi_conf->c_desc.compl_ptr = NULL; + } + + return 0; +} + static int cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = NULL; int rc = 0; - RTE_SET_USED(conf); RTE_SET_USED(conf_sz); dpivf = dev->fp_obj->dev_private; + /* Accept 
only number of vchans as config from application. */ + if (!(dpivf->flag & CNXK_DPI_DEV_START)) { + /* After config function, vchan setup function has to be called. + * Free up vchan memory if any, before configuring num_vchans. + */ + cnxk_dmadev_vchan_free(dpivf, RTE_DMA_ALL_VCHAN); + dpivf->num_vchans = conf->nb_vchans; + } + if (dpivf->flag & CNXK_DPI_DEV_CONFIG) return rc; @@ -73,7 +117,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(conf_sz); - if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + if (dpivf->flag & CNXK_DPI_DEV_START) return 0; header->cn9k.pt = DPI_HDR_PT_ZBW_CA; @@ -112,6 +156,9 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.pvfe = 0; }; + /* Free up descriptor memory before allocating. */ + cnxk_dmadev_vchan_free(dpivf, vchan); + max_desc = conf->nb_desc; if (!rte_is_power_of_2(max_desc)) max_desc = rte_align32pow2(max_desc); @@ -130,15 +177,15 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, for (i = 0; i < max_desc; i++) { dpi_conf->c_desc.compl_ptr[i] = rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + if (!dpi_conf->c_desc.compl_ptr[i]) { + plt_err("Failed to allocate for descriptor memory"); + return -ENOMEM; + } + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; } dpi_conf->c_desc.max_cnt = (max_desc - 1); - dpi_conf->c_desc.head = 0; - dpi_conf->c_desc.tail = 0; - dpivf->pnum_words = 0; - dpivf->pending = 0; - dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -156,7 +203,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(conf_sz); - if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + if (dpivf->flag & CNXK_DPI_DEV_START) return 0; header->cn10k.pt = DPI_HDR_PT_ZBW_CA; @@ -195,6 +242,9 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.pvfe = 0; }; + /* Free up descriptor memory before allocating. 
*/ + cnxk_dmadev_vchan_free(dpivf, vchan); + max_desc = conf->nb_desc; if (!rte_is_power_of_2(max_desc)) max_desc = rte_align32pow2(max_desc); @@ -213,15 +263,14 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, for (i = 0; i < max_desc; i++) { dpi_conf->c_desc.compl_ptr[i] = rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + if (!dpi_conf->c_desc.compl_ptr[i]) { + plt_err("Failed to allocate for descriptor memory"); + return -ENOMEM; + } dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; } dpi_conf->c_desc.max_cnt = (max_desc - 1); - dpi_conf->c_desc.head = 0; - dpi_conf->c_desc.tail = 0; - dpivf->pnum_words = 0; - dpivf->pending = 0; - dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -230,13 +279,27 @@ static int cnxk_dmadev_start(struct rte_dma_dev *dev) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + struct cnxk_dpi_conf *dpi_conf; + int i, j; if (dpivf->flag & CNXK_DPI_DEV_START) return 0; - dpivf->desc_idx = 0; - dpivf->pending = 0; - dpivf->pnum_words = 0; + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; + dpi_conf->desc_idx = 0; + for (j = 0; j < dpi_conf->c_desc.max_cnt; j++) { + if (dpi_conf->c_desc.compl_ptr[j]) + dpi_conf->c_desc.compl_ptr[j]->cdata = DPI_REQ_CDATA; + } + + cnxk_stats_reset(dev, i); + } + roc_dpi_enable(&dpivf->rdpi); dpivf->flag |= CNXK_DPI_DEV_START; @@ -250,7 +313,6 @@ cnxk_dmadev_stop(struct rte_dma_dev *dev) struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; roc_dpi_disable(&dpivf->rdpi); - dpivf->flag &= ~CNXK_DPI_DEV_START; return 0; @@ -262,8 +324,10 @@ cnxk_dmadev_close(struct rte_dma_dev *dev) struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; roc_dpi_disable(&dpivf->rdpi); + cnxk_dmadev_vchan_free(dpivf, RTE_DMA_ALL_VCHAN); roc_dpi_dev_fini(&dpivf->rdpi); + /* Clear all flags as we close the device. 
*/ dpivf->flag = 0; return 0; @@ -404,13 +468,13 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d rte_wmb(); if (flags & RTE_DMA_OP_FLAG_SUBMIT) { plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + dpi_conf->stats.submitted++; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static int @@ -471,13 +535,13 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; + dpi_conf->stats.submitted += nb_src; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static int @@ -522,13 +586,13 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + dpi_conf->stats.submitted++; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return dpivf->desc_idx++; + return dpi_conf->desc_idx++; } static int @@ -580,13 +644,13 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; + dpi_conf->stats.submitted += nb_src; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static uint16_t @@ -606,7 +670,7 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, if (comp_ptr->cdata == DPI_REQ_CDATA) break; *has_error = 1; - dpivf->stats.errors++; + dpi_conf->stats.errors++; STRM_INC(*c_desc, head); break; } @@ -615,8 +679,8 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, STRM_INC(*c_desc, head); } - dpivf->stats.completed += cnt; - *last_idx = dpivf->stats.completed - 1; + dpi_conf->stats.completed += cnt; + *last_idx = dpi_conf->stats.completed - 1; return cnt; } @@ -640,14 +704,14 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n if (status[cnt] == DPI_REQ_CDATA) break; - dpivf->stats.errors++; + dpi_conf->stats.errors++; } comp_ptr->cdata = DPI_REQ_CDATA; STRM_INC(*c_desc, head); } - dpivf->stats.completed += cnt; - *last_idx = dpivf->stats.completed - 1; + dpi_conf->stats.completed += cnt; + *last_idx = dpi_conf->stats.completed - 1; return cnt; } @@ -660,26 +724,28 @@ cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan) uint16_t burst_cap; burst_cap = dpi_conf->c_desc.max_cnt - - ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; + ((dpi_conf->stats.submitted - dpi_conf->stats.completed) + dpi_conf->pending) + + 1; return burst_cap; } static int -cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused) +cnxk_dmadev_submit(void *dev_private, uint16_t vchan) { struct cnxk_dpi_vf_s *dpivf = dev_private; - uint32_t num_words = dpivf->pnum_words; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + uint32_t num_words 
= dpi_conf->pnum_words; - if (!dpivf->pnum_words) + if (!dpi_conf->pnum_words) return 0; rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += dpivf->pending; - dpivf->pnum_words = 0; - dpivf->pending = 0; + dpi_conf->stats.submitted += dpi_conf->pending; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; return 0; } @@ -689,25 +755,59 @@ cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_sta uint32_t size) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct rte_dma_stats *stats = &dpivf->stats; - - RTE_SET_USED(vchan); + struct cnxk_dpi_conf *dpi_conf; + int i; if (size < sizeof(rte_stats)) return -EINVAL; if (rte_stats == NULL) return -EINVAL; - *rte_stats = *stats; + /* Stats of all vchans requested. */ + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + rte_stats->submitted += dpi_conf->stats.submitted; + rte_stats->completed += dpi_conf->stats.completed; + rte_stats->errors += dpi_conf->stats.errors; + } + + goto done; + } + + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + dpi_conf = &dpivf->conf[vchan]; + *rte_stats = dpi_conf->stats; + +done: return 0; } static int -cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) +cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + struct cnxk_dpi_conf *dpi_conf; + int i; + + /* clear stats of all vchans. */ + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + dpi_conf->stats = (struct rte_dma_stats){0}; + } + + return 0; + } + + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + dpi_conf = &dpivf->conf[vchan]; + dpi_conf->stats = (struct rte_dma_stats){0}; - dpivf->stats = (struct rte_dma_stats){0}; return 0; } diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 4693960a19..f375143b16 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -10,6 +10,7 @@ #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? 
(s).max_cnt : ((s).var - 1)) #define DPI_MAX_DESC 1024 +#define DPI_MIN_DESC 2 #define MAX_VCHANS_PER_QUEUE 4 /* Set Completion data to 0xFF when request submitted, @@ -17,9 +18,8 @@ */ #define DPI_REQ_CDATA 0xFF -#define CNXK_DPI_DEV_CONFIG (1ULL << 0) -#define CNXK_DPI_VCHAN_CONFIG (1ULL << 1) -#define CNXK_DPI_DEV_START (1ULL << 2) +#define CNXK_DPI_DEV_CONFIG (1ULL << 0) +#define CNXK_DPI_DEV_START (1ULL << 1) struct cnxk_dpi_compl_s { uint64_t cdata; @@ -36,16 +36,18 @@ struct cnxk_dpi_cdesc_data_s { struct cnxk_dpi_conf { union dpi_instr_hdr_s hdr; struct cnxk_dpi_cdesc_data_s c_desc; + uint16_t pnum_words; + uint16_t pending; + uint16_t desc_idx; + uint16_t pad0; + struct rte_dma_stats stats; }; struct cnxk_dpi_vf_s { struct roc_dpi rdpi; struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE]; - struct rte_dma_stats stats; - uint16_t pending; - uint16_t pnum_words; - uint16_t desc_idx; + uint16_t num_vchans; uint16_t flag; -}; +} __plt_cache_aligned; #endif From patchwork Mon Aug 21 17:49:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130619 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 9E2C6430C3; Mon, 21 Aug 2023 19:50:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6492343067; Mon, 21 Aug 2023 19:50:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id C8C7842FB2 for ; Mon, 21 Aug 2023 19:50:26 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37LDE6Zv004948 for ; Mon, 21 Aug 2023 10:50:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=2RzzJlSjwG/Rd3vsnqJgyTahbwLwqPF/guc4jA7nppM=; b=SDL45ica/uZPPiU2k8D4DuXV+OyMgI/OyRc1VhjMU1AoHztN0ZCN6sxUJLHjJhaXBKPM dECKPrMNtt228HPOYccWyB7HApodWRj0FBColvAbo4Z62vN3AMXEbSJq4KJYXP+peWKm GikQG2vW2/i5gMNNDTmLKinHB18Vo2knfAdhqDHITlYL/+Wm550oucZuR4rFvJxk1WZi Tay87SBNu5wVrlu58SmW1H/Jb9yPChOZggmY1SsxLlLdcGniJ1d/Z9CDEyqna4Y6jrPn N91qPUlVAhUDyphkjTUPxdWt/IrdfeOU4Keq7HfXE2z6Dm/Lf6yfjnR/VG7mnooLGnro Mg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sju3qp3ax-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 21 Aug 2023 10:50:25 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Mon, 21 Aug 2023 10:50:11 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Mon, 21 Aug 2023 10:50:11 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 32B683F7092; Mon, 21 Aug 2023 10:50:09 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , Subject: [PATCH v4 7/8] dma/cnxk: add completion ring tail wrap check Date: Mon, 21 Aug 2023 23:19:41 +0530 Message-ID: 
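With the vchan rework above, each virtual channel of a DPI queue carries its own descriptor ring, pending counters and rte_dma_stats, and the configure step records only the number of vchans requested by the application. The sketch below shows the application-side flow against the public dmadev API; it is a usage illustration, not code from the patch, and dev_id is assumed to be a cnxk dmadev already probed by the application.

    /* Configure two vchans on one cnxk dmadev, start it, and read stats
     * either per vchan or aggregated with RTE_DMA_ALL_VCHAN.
     */
    #include <rte_dmadev.h>

    static int
    setup_two_vchans(int16_t dev_id)
    {
            struct rte_dma_conf dev_conf = { .nb_vchans = 2 };
            struct rte_dma_vchan_conf qconf = {
                    .direction = RTE_DMA_DIR_MEM_TO_MEM,
                    .nb_desc = 1024,
            };
            struct rte_dma_stats stats;
            uint16_t vchan;
            int ret;

            ret = rte_dma_configure(dev_id, &dev_conf);
            if (ret < 0)
                    return ret;

            for (vchan = 0; vchan < dev_conf.nb_vchans; vchan++) {
                    ret = rte_dma_vchan_setup(dev_id, vchan, &qconf);
                    if (ret < 0)
                            return ret;
            }

            /* Starting the device resets per-vchan state (head/tail,
             * pending words and stats), as cnxk_dmadev_start() now does.
             */
            ret = rte_dma_start(dev_id);
            if (ret < 0)
                    return ret;

            /* Counters of a single vchan ... */
            rte_dma_stats_get(dev_id, 0, &stats);
            /* ... or the sum over all configured vchans. */
            rte_dma_stats_get(dev_id, RTE_DMA_ALL_VCHAN, &stats);

            return 0;
    }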
<20230821174942.3165191-7-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com> References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: xrZYA3qSWrKUDWwPr2W83LrJ_lvNFRmH X-Proofpoint-ORIG-GUID: xrZYA3qSWrKUDWwPr2W83LrJ_lvNFRmH X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-21_06,2023-08-18_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Vamsi Attunuru Adds a check to avoid tail wrap when completion desc ring is full. Also patch increase max desc size to 2048. Signed-off-by: Vamsi Attunuru --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. drivers/dma/cnxk/cnxk_dmadev.c | 22 ++++++++++++++++++++-- drivers/dma/cnxk/cnxk_dmadev.h | 2 +- 2 files changed, 21 insertions(+), 3 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 9fb3bb264a..288606bb3d 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -434,6 +434,11 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -494,6 +499,11 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + /* * For inbound case, src pointers are last pointers. * For all other cases, src pointers are first pointers. 
@@ -561,6 +571,11 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -613,6 +628,11 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn10k.nfst = nb_src & DPI_MAX_POINTER; header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; @@ -695,8 +715,6 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(last_idx); - for (cnt = 0; cnt < nb_cpls; cnt++) { comp_ptr = c_desc->compl_ptr[c_desc->head]; status[cnt] = comp_ptr->cdata; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index f375143b16..9c6c898d23 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -9,7 +9,7 @@ #define DPI_MAX_POINTER 15 #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) -#define DPI_MAX_DESC 1024 +#define DPI_MAX_DESC 2048 #define DPI_MIN_DESC 2 #define MAX_VCHANS_PER_QUEUE 4 From patchwork Mon Aug 21 17:49:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130620 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A22C1430C3; Mon, 21 Aug 2023 19:50:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8EB5C43265; Mon, 21 Aug 2023 19:50:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 5DB9242FB2 for ; Mon, 21 Aug 2023 19:50:27 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37LDE6Zw004948 for ; Mon, 21 Aug 2023 10:50:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=ZPxp61qeHy20gkyj3QaOtv/1k2Iz/bGTgWKXwuwyWVk=; b=kC9MHurZ/F/9hUrElJVngzRbrnDvh/EpGUdc41IjHxkkgobRnsktyooOIRoiRBjWBzF1 JlrvqdQFQiK4DbYQxHFRhD2bbV6BmPeZ6ESsCjYVVK7B0aEEhpa054JzkNNzG66H2gpq uTaTb7SFGWxLN2lrLAyTIB4h/3QF2XRViwNwVloPMH11ZNBsihPpf4FTqlTMYVPBstVw h/WHSl3s7y0YdmYlDMQh7woEhfEaftfx2q+OwE8MHWDpwtJY5fcB81Ihz7cBWCQdBvpx g4s/rp6pLliUmrENeuVdcfq202itm0wiVjbYjPoVVjYA2BylCCjxga+3pe3VZkHJ4zc6 mQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sju3qp3ax-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Mon, 21 Aug 2023 10:50:26 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Mon, 21 Aug 2023 10:50:14 
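The check added in the copy paths above advances the tail first and, if it lands on the head, rolls the increment back and returns -ENOSPC, so the producer can never wrap onto the consumer and a full ring is reported to the caller instead of silently overwriting completion slots. A standalone sketch of that reserve/rollback step is below; names mirror the driver, the harness is illustrative only.

    /* Reserve one completion-ring slot, or report the ring as full.
     * The caller is expected to drain completions and retry on -ENOSPC.
     */
    #include <errno.h>
    #include <stdint.h>

    #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt)
    #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1))

    struct cdesc {
            uint16_t max_cnt;  /* ring size - 1 (power-of-two ring) */
            uint16_t head;     /* advanced when completions are harvested */
            uint16_t tail;     /* advanced when a request is enqueued */
    };

    static int
    ring_reserve_slot(struct cdesc *cd)
    {
            STRM_INC(*cd, tail);
            if (cd->tail == cd->head) {
                    /* Full: undo the increment so tail never catches head,
                     * keeping the usual one-slot gap that lets tail == head
                     * mean "empty" everywhere else.
                     */
                    STRM_DEC(*cd, tail);
                    return -ENOSPC;
            }
            return 0;
    }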
-0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Mon, 21 Aug 2023 10:50:14 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 4B9013F709B; Mon, 21 Aug 2023 10:50:13 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , Subject: [PATCH v4 8/8] dma/cnxk: track last index return value Date: Mon, 21 Aug 2023 23:19:42 +0530 Message-ID: <20230821174942.3165191-8-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com> References: <20230818090159.2597468-1-amitprakashs@marvell.com> <20230821174942.3165191-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: UC6MEt4fhDvtFfMg6-jqgpHiU0q9lRMG X-Proofpoint-ORIG-GUID: UC6MEt4fhDvtFfMg6-jqgpHiU0q9lRMG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-21_06,2023-08-18_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Vamsi Attunuru last index value might lost the order when dma stats are reset in between copy operations. Patch adds a variable to track the completed count, that can be used to compute the last index, also patch adds misc other changes. Signed-off-by: Vamsi Attunuru --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. drivers/dma/cnxk/cnxk_dmadev.c | 17 ++++++++++------- drivers/dma/cnxk/cnxk_dmadev.h | 1 + 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 288606bb3d..7e728b943b 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -298,6 +298,7 @@ cnxk_dmadev_start(struct rte_dma_dev *dev) } cnxk_stats_reset(dev, i); + dpi_conf->completed_offset = 0; } roc_dpi_enable(&dpivf->rdpi); @@ -479,7 +480,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static int @@ -545,13 +546,13 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpi_conf->stats.submitted += nb_src; + dpi_conf->stats.submitted++; } else { dpi_conf->pnum_words += num_words; dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static int @@ -664,13 +665,13 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpi_conf->stats.submitted += nb_src; + dpi_conf->stats.submitted++; } else { dpi_conf->pnum_words += num_words; dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static uint16_t @@ -700,7 +701,7 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, } dpi_conf->stats.completed += cnt; - *last_idx = dpi_conf->stats.completed - 1; + *last_idx = (dpi_conf->completed_offset + dpi_conf->stats.completed - 1) & 0xffff; return cnt; } @@ -729,7 
+730,7 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n } dpi_conf->stats.completed += cnt; - *last_idx = dpi_conf->stats.completed - 1; + *last_idx = (dpi_conf->completed_offset + dpi_conf->stats.completed - 1) & 0xffff; return cnt; } @@ -814,6 +815,7 @@ cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) if (vchan == RTE_DMA_ALL_VCHAN) { for (i = 0; i < dpivf->num_vchans; i++) { dpi_conf = &dpivf->conf[i]; + dpi_conf->completed_offset += dpi_conf->stats.completed; dpi_conf->stats = (struct rte_dma_stats){0}; } @@ -824,6 +826,7 @@ cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) return -EINVAL; dpi_conf = &dpivf->conf[vchan]; + dpi_conf->completed_offset += dpi_conf->stats.completed; dpi_conf->stats = (struct rte_dma_stats){0}; return 0; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 9c6c898d23..254e7fea20 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -41,6 +41,7 @@ struct cnxk_dpi_conf { uint16_t desc_idx; uint16_t pad0; struct rte_dma_stats stats; + uint64_t completed_offset; }; struct cnxk_dpi_vf_s {
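The completed_offset field added here keeps the index returned through last_idx monotonic (modulo 2^16) even when the application resets stats between copy operations: a reset folds the completed count into the offset instead of discarding it. The sketch below reproduces just that bookkeeping; names mirror the driver, the harness is illustrative only.

    /* last_idx computation across a stats reset. */
    #include <stdint.h>
    #include <stdio.h>

    struct vchan_counts {
            uint64_t completed;        /* completions since the last reset */
            uint64_t completed_offset; /* completions folded in by resets */
    };

    static uint16_t
    last_index(const struct vchan_counts *c)
    {
            return (c->completed_offset + c->completed - 1) & 0xffff;
    }

    static void
    stats_reset(struct vchan_counts *c)
    {
            c->completed_offset += c->completed;
            c->completed = 0;
    }

    int
    main(void)
    {
            struct vchan_counts c = { .completed = 100 };

            printf("last_idx before reset: %u\n", last_index(&c)); /* 99 */
            stats_reset(&c);
            c.completed = 5;  /* five more completions after the reset */
            printf("last_idx after reset:  %u\n", last_index(&c)); /* 104, not 4 */
            return 0;
    }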