From patchwork Wed Aug 23 11:15:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130677 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3AC94430DF; Wed, 23 Aug 2023 13:15:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 10378410FB; Wed, 23 Aug 2023 13:15:44 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 90BE4410F1; Wed, 23 Aug 2023 13:15:42 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7AMpP012902; Wed, 23 Aug 2023 04:15:42 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=CkZmp6R54RWsq+YGGdK2ynHyavscanJmo38w+pMRaAE=; b=eHKV1tny2E1FzeNnFl/c6KtVxlTLL8wP+962wDvcU55iLYeNWHEMtt31GkpGBrYX02jo 06opuNUJT0P44Awj5or8wV9OGtbzlZweiiIUSJkDexoBH6lm8SYWJGj01DjyXfpgQ7Og o/xvf7LtqQoXTLKGWzxqPmd4bAdSp3MTOjrUIYdVRMduSZDbk2ODlgPae1V81pDIxPs8 JI1s73YXU/20PsxFc/I/8E43ypLD4FtMrXvmtKjXQHAeCd6xa12OwyP9uUgDeRKqjWSG QGRlAw4b7SSNEaGnPtTmXWCN9Iv0yR2BkC7/+gFWswqt6YQIanWBQMc3WpW67+aEkthY +g== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmcm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:15:41 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:15:39 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:15:39 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 8F4DB3F708A; Wed, 23 Aug 2023 04:15:36 -0700 (PDT) From: Amit Prakash Shukla To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao CC: , , Amit Prakash Shukla , , Radha Mohan Chintakuntla Subject: [PATCH v5 01/12] common/cnxk: use unique name for DPI memzone Date: Wed, 23 Aug 2023 16:45:14 +0530 Message-ID: <20230823111525.3975662-1-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230821174942.3165191-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: ETDxJzFbYd_bIiOW0RiPDAEc8M5mTjm0 X-Proofpoint-ORIG-GUID: ETDxJzFbYd_bIiOW0RiPDAEc8M5mTjm0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org roc_dpi was using vfid as part of name for memzone allocation. This led to memzone allocation failure in case of multiple physical functions. 
vfid is not unique by itself since multiple physical functions can have the same virtual function indices. So use complete DBDF as part of memzone name to make it unique. Fixes: b6e395692b6d ("common/cnxk: add DPI DMA support") Cc: stable@dpdk.org Signed-off-by: Radha Mohan Chintakuntla Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/common/cnxk/roc_dpi.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/common/cnxk/roc_dpi.c b/drivers/common/cnxk/roc_dpi.c index 93c8318a3d..0e2f803077 100644 --- a/drivers/common/cnxk/roc_dpi.c +++ b/drivers/common/cnxk/roc_dpi.c @@ -81,10 +81,10 @@ roc_dpi_configure(struct roc_dpi *roc_dpi) return rc; } - snprintf(name, sizeof(name), "dpimem%d", roc_dpi->vfid); + snprintf(name, sizeof(name), "dpimem%d:%d:%d:%d", pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); buflen = DPI_CMD_QUEUE_SIZE * DPI_CMD_QUEUE_BUFS; - dpi_mz = plt_memzone_reserve_aligned(name, buflen, 0, - DPI_CMD_QUEUE_SIZE); + dpi_mz = plt_memzone_reserve_aligned(name, buflen, 0, DPI_CMD_QUEUE_SIZE); if (dpi_mz == NULL) { plt_err("dpi memzone reserve failed"); rc = -ENOMEM; From patchwork Wed Aug 23 11:15:15 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130678 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6DC77430DF; Wed, 23 Aug 2023 13:16:01 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5E368411F3; Wed, 23 Aug 2023 13:16:01 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 46688410F1 for ; Wed, 23 Aug 2023 13:16:00 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7AM79013497 for ; Wed, 23 Aug 2023 04:15:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=QyHdgr4c37f/q+zRdx/Nam5tUqBwfyQ1RqE+2EVOihw=; b=P04zUzyipngY5WSCtUNotElGEbuSOwCce0d4Vf2yDfxBAL3drhwwEaOh8epM1nq3n0YS p/ttv7kkiLnPE86ySpVJFWtudLuLeHn57e4pou1O6sAdhtlt6prpc4iRW9PWqqx2bKas uCpvHLL7eH7OAErH95ZVNbU4Q6VREsd7oaNiqPw/2b/EECI80dk6SLCRy43elira4FL4 AaThmJz+0bH0+kPoMwlkYMiojF7O2QWnJ+4I4O77DVojp34JMuHDjrRMCyMG5GLQVh1N kN43ZfZGxRQ3eV/eun6Q8PQVho288BRDYpjD3Ua0MSNST3yZna+U0QYzF+PE6J/Y7qUK mQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmd2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:15:59 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:15:57 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 
04:15:57 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id B357C3F708A; Wed, 23 Aug 2023 04:15:55 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla Subject: [PATCH v5 02/12] dma/cnxk: support for burst capacity Date: Wed, 23 Aug 2023 16:45:15 +0530 Message-ID: <20230823111525.3975662-2-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: QjWgbGiMD7Pzy2kXelLYNHR6d9qAbSsG X-Proofpoint-ORIG-GUID: QjWgbGiMD7Pzy2kXelLYNHR6d9qAbSsG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adds support for the burst capacity. Call to the function return number of vacant space in descriptor ring for the current burst. Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 125 ++++++++++++++++++++++----------- drivers/dma/cnxk/cnxk_dmadev.h | 6 +- 2 files changed, 87 insertions(+), 44 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index a6f4a31e0e..f06c979b9c 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -108,6 +108,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; dpivf->conf.c_desc.head = 0; dpivf->conf.c_desc.tail = 0; + dpivf->pending = 0; return 0; } @@ -164,6 +165,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; dpivf->conf.c_desc.head = 0; dpivf->conf.c_desc.tail = 0; + dpivf->pending = 0; return 0; } @@ -174,7 +176,8 @@ cnxk_dmadev_start(struct rte_dma_dev *dev) struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; dpivf->desc_idx = 0; - dpivf->num_words = 0; + dpivf->pending = 0; + dpivf->pnum_words = 0; roc_dpi_enable(&dpivf->rdpi); return 0; @@ -294,7 +297,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -322,17 +325,21 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, dpivf->cmd[num_words++] = lptr; rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; - } - dpivf->num_words += num_words; + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; } - return dpivf->desc_idx++; + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted++; + } else { + dpivf->pnum_words += num_words; 
+ dpivf->pending++; + } + + return (dpivf->desc_idx++); } static int @@ -353,7 +360,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); /* * For inbound case, src pointers are last pointers. @@ -388,17 +395,21 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, } rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; - } - dpivf->num_words += num_words; + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; } - return (rc < 0) ? rc : dpivf->desc_idx++; + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted += nb_src; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; + } + + return (dpivf->desc_idx++); } static int @@ -417,7 +428,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -436,14 +447,18 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, dpivf->cmd[num_words++] = lptr; rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; - } - dpivf->num_words += num_words; + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; + } + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted++; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; } return dpivf->desc_idx++; @@ -467,7 +482,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc); + STRM_INC(dpivf->conf.c_desc, tail); header->cn10k.nfst = nb_src & 0xf; header->cn10k.nlst = nb_dst & 0xf; @@ -492,17 +507,21 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, } rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); - if (!rc) { - if (flags & RTE_DMA_OP_FLAG_SUBMIT) { - rte_wmb(); - plt_write64(num_words, - dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; - } - dpivf->num_words += num_words; + if (unlikely(rc)) { + STRM_DEC(dpivf->conf.c_desc, tail); + return rc; + } + + if (flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpivf->stats.submitted += nb_src; + } else { + dpivf->pnum_words += num_words; + dpivf->pending++; } - return (rc < 0) ? 
rc : dpivf->desc_idx++; + return (dpivf->desc_idx++); } static uint16_t @@ -566,14 +585,35 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, return cnt; } +static uint16_t +cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan) +{ + const struct cnxk_dpi_vf_s *dpivf = (const struct cnxk_dpi_vf_s *)dev_private; + uint16_t burst_cap; + + RTE_SET_USED(vchan); + + burst_cap = dpivf->conf.c_desc.max_cnt - + ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; + + return burst_cap; +} + static int cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused) { struct cnxk_dpi_vf_s *dpivf = dev_private; + uint32_t num_words = dpivf->pnum_words; + + if (!dpivf->pnum_words) + return 0; rte_wmb(); - plt_write64(dpivf->num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); + + dpivf->stats.submitted += dpivf->pending; + dpivf->pnum_words = 0; + dpivf->pending = 0; return 0; } @@ -666,6 +706,7 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, dmadev->fp_obj->submit = cnxk_dmadev_submit; dmadev->fp_obj->completed = cnxk_dmadev_completed; dmadev->fp_obj->completed_status = cnxk_dmadev_completed_status; + dmadev->fp_obj->burst_capacity = cnxk_damdev_burst_capacity; if (pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KA || pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KA || diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index e1f5694f50..943e9e3013 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -7,7 +7,8 @@ #define DPI_MAX_POINTER 15 #define DPI_QUEUE_STOP 0x0 #define DPI_QUEUE_START 0x1 -#define STRM_INC(s) ((s).tail = ((s).tail + 1) % (s).max_cnt) +#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) +#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? 
(s).max_cnt : ((s).var - 1)) #define DPI_MAX_DESC 1024 /* Set Completion data to 0xFF when request submitted, @@ -37,7 +38,8 @@ struct cnxk_dpi_vf_s { struct cnxk_dpi_conf conf; struct rte_dma_stats stats; uint64_t cmd[DPI_MAX_CMD_SIZE]; - uint32_t num_words; + uint16_t pending; + uint16_t pnum_words; uint16_t desc_idx; }; From patchwork Wed Aug 23 11:15:16 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130679 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6BD7F430DF; Wed, 23 Aug 2023 13:16:08 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D3FC34324E; Wed, 23 Aug 2023 13:16:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 6C7F9410F1; Wed, 23 Aug 2023 13:16:05 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N79aov012862; Wed, 23 Aug 2023 04:16:04 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=HMn170sHGqONPGV/xecXrbJ9pAflDA5ifN+FuhGg5a8=; b=c2q80buZS0YgjL6imQuVoX2/P91ZTGnFvFPO046N6T74iP6mERFXnlVLuuwLHnlry61/ JWBtV9AD9i6hYymSVkQrTbpDfKSEM117UYB0G2+un952VqulhEcmLxuFyrq5T28pEW8J V1ObmJsbg4BF9O7GIrkU8h1Lh/1/trMoIYkl/wWRKShZN070Nq5obbTI4LvR00CqO5qg lDtJCGwPVWhg2gQhInrJBR07qSkVt1ph9GXWIPd0hd850+nY12dZk1Fk/KZP4RF1FfC8 Gldp1L206NsTzEsEZJzHqvs3RYoo/kppYIlkfUeKvj6mcUZU2XWVRqwD1hJiQcLqiDUm iw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmdf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:16:04 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:02 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:02 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 869EA3F708A; Wed, 23 Aug 2023 04:16:00 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Subject: [PATCH v5 03/12] dma/cnxk: set dmadev to ready state Date: Wed, 23 Aug 2023 16:45:16 +0530 Message-ID: <20230823111525.3975662-3-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: Zr0Ic6K407B8oTzkTeOLsJG-J7FAhztw X-Proofpoint-ORIG-GUID: Zr0Ic6K407B8oTzkTeOLsJG-J7FAhztw X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org When a device is not set to a ready state, on exiting the application proper cleanup is not done. This causes the application to fail on trying to run next time. Setting the device to ready state on successful probe fixes the issue. Fixes: 53f6d7328bf4 ("dma/cnxk: create and initialize device on PCI probing") Cc: stable@dpdk.org Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index f06c979b9c..d8bd61a048 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -668,8 +668,7 @@ static const struct rte_dma_dev_ops cnxk_dmadev_ops = { }; static int -cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) +cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { struct cnxk_dpi_vf_s *dpivf = NULL; char name[RTE_DEV_NAME_MAX_LEN]; @@ -688,8 +687,7 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, memset(name, 0, sizeof(name)); rte_pci_device_name(&pci_dev->addr, name, sizeof(name)); - dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, - sizeof(*dpivf)); + dmadev = rte_dma_pmd_allocate(name, pci_dev->device.numa_node, sizeof(*dpivf)); if (dmadev == NULL) { plt_err("dma device allocation failed for %s", name); return -ENOMEM; @@ -723,6 +721,8 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, if (rc < 0) goto err_out_free; + dmadev->state = RTE_DMA_DEV_READY; + return 0; err_out_free: From patchwork Wed Aug 23 11:15:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130680 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BC005430DF; Wed, 23 Aug 2023 13:16:15 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 62DCD4325B; Wed, 23 Aug 2023 13:16:13 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id D3E2542B8B; Wed, 23 Aug 2023 13:16:10 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N72aRv013047; Wed, 23 Aug 2023 04:16:10 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=+++tQ1XlNrIvQqt8lXLqub1p4n8McQ0FQa7YGYcw2GM=; b=XhXEqxWOUiuhmbF63iRXobgUcoo46bEaC9Gpq1Z7TF77AzC/SqXi2Nk4jZRdczm6MCAa MWTzjXDql9Wz6QCEjP6hjkFcfi0K7ved0IFJLR7tqyJPM3CvB9hciQGTE0DfdxECSMc9 BO+bcNFOQOhQUlXPba/ZUQ0XoFZ4DaYioneFoOMA3qz7HhDBLBhKjtj8FNbMOLejEuAY UBrWKQgqf9AcN1S3fSnpwZEr8up2xnCq6YECOULSnbL+4GjBbq34wHSMRjEK5OW1b9G2 I7ygcs48k5ZsVvAo+otRMvkX6NaISbj/tQwTCm0LCWG70cPeCGzC1WKtPznA2QrJ1DcE JA== Received: from dc5-exch02.marvell.com 
([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmds-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:16:10 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:08 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:08 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 2A35D3F708A; Wed, 23 Aug 2023 04:16:05 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Subject: [PATCH v5 04/12] dma/cnxk: flag support for dma device Date: Wed, 23 Aug 2023 16:45:17 +0530 Message-ID: <20230823111525.3975662-4-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 9-HJhB22noZS5mfUqg6_hlokUVbxH77K X-Proofpoint-ORIG-GUID: 9-HJhB22noZS5mfUqg6_hlokUVbxH77K X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Multiple call to configure, setup queues without stopping the device would leak the ring descriptor and hardware queue memory. This patch adds flags support to prevent configuring without stopping the device. Fixes: b56f1e2dad38 ("dma/cnxk: add channel operations") Cc: stable@dpdk.org Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. 
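For reviewers, a minimal standalone sketch of the guard-flag pattern this patch introduces is shown below. The flag names match the patch; the reduced device structure, the model_* helpers and main() are purely illustrative and are not driver code.

/* Illustrative model only -- not part of the cnxk driver. */
#include <stdint.h>
#include <stdio.h>

#define CNXK_DPI_DEV_CONFIG   (1ULL << 0)
#define CNXK_DPI_VCHAN_CONFIG (1ULL << 1)
#define CNXK_DPI_DEV_START    (1ULL << 2)

/* Reduced stand-in for struct cnxk_dpi_vf_s. */
struct dpi_vf_model {
	uint16_t flag;
};

static int
model_configure(struct dpi_vf_model *vf)
{
	/* Repeated calls are no-ops while the flag is set, so queue and
	 * ring-descriptor memory cannot be allocated twice. */
	if (vf->flag & CNXK_DPI_DEV_CONFIG)
		return 0;

	/* ... the real driver calls roc_dpi_configure() here ... */
	vf->flag |= CNXK_DPI_DEV_CONFIG;
	return 0;
}

static void
model_stop(struct dpi_vf_model *vf)
{
	/* Mirrors cnxk_dmadev_stop(): only the start flag is cleared. */
	vf->flag &= ~CNXK_DPI_DEV_START;
}

static void
model_close(struct dpi_vf_model *vf)
{
	/* Mirrors cnxk_dmadev_close(): all flags are reset. */
	vf->flag = 0;
}

int
main(void)
{
	struct dpi_vf_model vf = {0};

	model_configure(&vf);
	model_configure(&vf); /* ignored: device already configured */
	model_stop(&vf);
	model_close(&vf);
	printf("final flag = 0x%x\n", vf.flag);
	return 0;
}

The effect is that configure and vchan setup become idempotent while the corresponding flag is set, and in the real driver close() resets the flags so the device can be configured again from scratch.
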
drivers/dma/cnxk/cnxk_dmadev.c | 32 +++++++++++++++++++++++++++++--- drivers/dma/cnxk/cnxk_dmadev.h | 5 +++++ 2 files changed, 34 insertions(+), 3 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index d8bd61a048..a7279fbd3a 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -45,14 +45,22 @@ cnxk_dmadev_configure(struct rte_dma_dev *dev, int rc = 0; RTE_SET_USED(conf); - RTE_SET_USED(conf); - RTE_SET_USED(conf_sz); RTE_SET_USED(conf_sz); + dpivf = dev->fp_obj->dev_private; + + if (dpivf->flag & CNXK_DPI_DEV_CONFIG) + return rc; + rc = roc_dpi_configure(&dpivf->rdpi); - if (rc < 0) + if (rc < 0) { plt_err("DMA configure failed err = %d", rc); + goto done; + } + dpivf->flag |= CNXK_DPI_DEV_CONFIG; + +done: return rc; } @@ -69,6 +77,9 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); + if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + return 0; + header->cn9k.pt = DPI_HDR_PT_ZBW_CA; switch (conf->direction) { @@ -109,6 +120,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, dpivf->conf.c_desc.head = 0; dpivf->conf.c_desc.tail = 0; dpivf->pending = 0; + dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -126,6 +138,10 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); + + if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + return 0; + header->cn10k.pt = DPI_HDR_PT_ZBW_CA; switch (conf->direction) { @@ -166,6 +182,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, dpivf->conf.c_desc.head = 0; dpivf->conf.c_desc.tail = 0; dpivf->pending = 0; + dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -175,11 +192,16 @@ cnxk_dmadev_start(struct rte_dma_dev *dev) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + if (dpivf->flag & CNXK_DPI_DEV_START) + return 0; + dpivf->desc_idx = 0; dpivf->pending = 0; dpivf->pnum_words = 0; roc_dpi_enable(&dpivf->rdpi); + dpivf->flag |= CNXK_DPI_DEV_START; + return 0; } @@ -190,6 +212,8 @@ cnxk_dmadev_stop(struct rte_dma_dev *dev) roc_dpi_disable(&dpivf->rdpi); + dpivf->flag &= ~CNXK_DPI_DEV_START; + return 0; } @@ -201,6 +225,8 @@ cnxk_dmadev_close(struct rte_dma_dev *dev) roc_dpi_disable(&dpivf->rdpi); roc_dpi_dev_fini(&dpivf->rdpi); + dpivf->flag = 0; + return 0; } diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 943e9e3013..573bcff165 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -16,6 +16,10 @@ */ #define DPI_REQ_CDATA 0xFF +#define CNXK_DPI_DEV_CONFIG (1ULL << 0) +#define CNXK_DPI_VCHAN_CONFIG (1ULL << 1) +#define CNXK_DPI_DEV_START (1ULL << 2) + struct cnxk_dpi_compl_s { uint64_t cdata; void *cb_data; @@ -41,6 +45,7 @@ struct cnxk_dpi_vf_s { uint16_t pending; uint16_t pnum_words; uint16_t desc_idx; + uint16_t flag; }; #endif From patchwork Wed Aug 23 11:15:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130681 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4E6C3430DF; Wed, 23 Aug 2023 13:16:22 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8F67A43257; Wed, 23 Aug 2023 13:16:16 +0200 (CEST) Received: 
from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 486334161A; Wed, 23 Aug 2023 13:16:15 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7AkEb012861; Wed, 23 Aug 2023 04:16:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=jljhvMkJfEqTgsJlmMH5aY6+6t7oMG6eI1e7FGfO7oQ=; b=S8Bwm6nQtiV0/8HCepHwH7PHjU1YXA5rj6UEforPwfQF2ypH3ydGTqmZWi9lRf3+pWSF M757cHpi3MByNcM5YzWNcYgtMwuoPB2DCH3CS1qh3yXwN+4dUojgscS13haCb4ZO2+Ua pp8wcB7e+Q9feipx63epwrmqpCBEWIQM40OnbFTSILd59mP8/JC0gpLtWK7dKtH3r+2O hPf3iKDU6uu8ISwsW+H+7KAUNxZPYw1J1UVi/Xp+0XDRdbXeL5wb6hk+myDbwTlY/vdO Bk/ofUYdUP4vUWnv3tKrKLkrUgDiJj5sV2r4z60Ia7/6K67c4z73xwJBYDm4imVou/h3 8w== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctme4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:16:14 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:12 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:11 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 3A8803F708A; Wed, 23 Aug 2023 04:16:09 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Subject: [PATCH v5 05/12] dma/cnxk: allocate completion ring buffer Date: Wed, 23 Aug 2023 16:45:18 +0530 Message-ID: <20230823111525.3975662-5-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: gb_P9rVuZupNpNL7DJ95k9wcvXJvifLX X-Proofpoint-ORIG-GUID: gb_P9rVuZupNpNL7DJ95k9wcvXJvifLX X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Completion buffer was a static array per dma device. This would consume memory for max number of descriptor supported by device which might be more than configured by application. The patchset allocates the memory for completion buffer based on the number of descriptor configured by application. Fixes: b56f1e2dad38 ("dma/cnxk: add channel operations") Cc: stable@dpdk.org Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. 
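For reviewers, the descriptor-count sizing used by this patch can be summarised with the small standalone sketch below. DPI_MAX_DESC and the max_cnt mask convention are taken from the patch; align32pow2() is a plain-C stand-in for rte_align32pow2() so the example compiles without DPDK headers, and nb_desc is a hypothetical application request.

/* Illustrative sizing logic only -- mirrors the calculation done in
 * the vchan_setup functions of this patch. */
#include <stdint.h>
#include <stdio.h>

#define DPI_MAX_DESC 1024

static uint32_t
align32pow2(uint32_t x) /* stand-in for rte_align32pow2() */
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

int
main(void)
{
	uint32_t nb_desc = 100; /* hypothetical application request */
	uint32_t max_desc = align32pow2(nb_desc);

	if (max_desc > DPI_MAX_DESC)
		max_desc = DPI_MAX_DESC;

	/* Only max_desc completion pointers are allocated (128 here) instead
	 * of a fixed DPI_MAX_DESC-sized static array; max_cnt = max_desc - 1
	 * then acts as the AND-mask used by STRM_INC(). */
	printf("nb_desc=%u -> max_desc=%u, max_cnt=%u\n",
	       nb_desc, max_desc, max_desc - 1u);
	return 0;
}

Rounding max_desc to a power of two is what lets STRM_INC() advance head/tail with a simple AND against max_cnt rather than a modulo.
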
drivers/dma/cnxk/cnxk_dmadev.c | 265 ++++++++++++++++++--------------- drivers/dma/cnxk/cnxk_dmadev.h | 13 +- 2 files changed, 148 insertions(+), 130 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index a7279fbd3a..0db74b454d 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -7,39 +7,35 @@ #include #include +#include +#include #include #include #include #include -#include -#include -#include #include static int -cnxk_dmadev_info_get(const struct rte_dma_dev *dev, - struct rte_dma_info *dev_info, uint32_t size) +cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size) { RTE_SET_USED(dev); RTE_SET_USED(size); dev_info->max_vchans = 1; dev_info->nb_vchans = 1; - dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | - RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | - RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | - RTE_DMA_CAPA_OPS_COPY_SG; + dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | + RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | + RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; dev_info->max_desc = DPI_MAX_DESC; - dev_info->min_desc = 1; + dev_info->min_desc = 2; dev_info->max_sges = DPI_MAX_POINTER; return 0; } static int -cnxk_dmadev_configure(struct rte_dma_dev *dev, - const struct rte_dma_conf *conf, uint32_t conf_sz) +cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = NULL; int rc = 0; @@ -66,12 +62,13 @@ cnxk_dmadev_configure(struct rte_dma_dev *dev, static int cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, - const struct rte_dma_vchan_conf *conf, - uint32_t conf_sz) + const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_compl_s *comp_data; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; + uint16_t max_desc; + uint32_t size; int i; RTE_SET_USED(vchan); @@ -107,18 +104,30 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.fport = conf->dst_port.pcie.coreid; }; - for (i = 0; i < conf->nb_desc; i++) { - comp_data = rte_zmalloc(NULL, sizeof(*comp_data), 0); - if (comp_data == NULL) { - plt_err("Failed to allocate for comp_data"); - return -ENOMEM; - } - comp_data->cdata = DPI_REQ_CDATA; - dpivf->conf.c_desc.compl_ptr[i] = comp_data; - }; - dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; - dpivf->conf.c_desc.head = 0; - dpivf->conf.c_desc.tail = 0; + max_desc = conf->nb_desc; + if (!rte_is_power_of_2(max_desc)) + max_desc = rte_align32pow2(max_desc); + + if (max_desc > DPI_MAX_DESC) + max_desc = DPI_MAX_DESC; + + size = (max_desc * sizeof(struct cnxk_dpi_compl_s *)); + dpi_conf->c_desc.compl_ptr = rte_zmalloc(NULL, size, 0); + + if (dpi_conf->c_desc.compl_ptr == NULL) { + plt_err("Failed to allocate for comp_data"); + return -ENOMEM; + } + + for (i = 0; i < max_desc; i++) { + dpi_conf->c_desc.compl_ptr[i] = + rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; + } + + dpi_conf->c_desc.max_cnt = (max_desc - 1); + dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; dpivf->pending = 0; dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; @@ -127,12 +136,13 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, static int cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t 
vchan, - const struct rte_dma_vchan_conf *conf, - uint32_t conf_sz) + const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_compl_s *comp_data; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; + uint16_t max_desc; + uint32_t size; int i; RTE_SET_USED(vchan); @@ -169,18 +179,30 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.fport = conf->dst_port.pcie.coreid; }; - for (i = 0; i < conf->nb_desc; i++) { - comp_data = rte_zmalloc(NULL, sizeof(*comp_data), 0); - if (comp_data == NULL) { - plt_err("Failed to allocate for comp_data"); - return -ENOMEM; - } - comp_data->cdata = DPI_REQ_CDATA; - dpivf->conf.c_desc.compl_ptr[i] = comp_data; - }; - dpivf->conf.c_desc.max_cnt = DPI_MAX_DESC; - dpivf->conf.c_desc.head = 0; - dpivf->conf.c_desc.tail = 0; + max_desc = conf->nb_desc; + if (!rte_is_power_of_2(max_desc)) + max_desc = rte_align32pow2(max_desc); + + if (max_desc > DPI_MAX_DESC) + max_desc = DPI_MAX_DESC; + + size = (max_desc * sizeof(struct cnxk_dpi_compl_s *)); + dpi_conf->c_desc.compl_ptr = rte_zmalloc(NULL, size, 0); + + if (dpi_conf->c_desc.compl_ptr == NULL) { + plt_err("Failed to allocate for comp_data"); + return -ENOMEM; + } + + for (i = 0; i < max_desc; i++) { + dpi_conf->c_desc.compl_ptr[i] = + rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; + } + + dpi_conf->c_desc.max_cnt = (max_desc - 1); + dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; dpivf->pending = 0; dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; @@ -308,12 +330,13 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) } static int -cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, - rte_iova_t dst, uint32_t length, uint64_t flags) +cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, uint32_t length, + uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; @@ -321,7 +344,6 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpivf->conf.c_desc, tail); @@ -340,17 +362,17 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, lptr = dst; } - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; /* word3 is always 0 */ num_words += 4; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = fptr; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = lptr; + cmd[num_words++] = length; + cmd[num_words++] = fptr; + cmd[num_words++] = length; + cmd[num_words++] = lptr; - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { STRM_DEC(dpivf->conf.c_desc, tail); return rc; @@ -369,22 +391,20 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, } static int -cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, - const struct rte_dma_sge *src, - const struct rte_dma_sge *dst, - uint16_t 
nb_src, uint16_t nb_dst, uint64_t flags) +cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpivf->conf.c_desc, tail); @@ -393,34 +413,34 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, * For all other cases, src pointers are first pointers. */ if (header->cn9k.xtype == DPI_XTYPE_INBOUND) { - header->cn9k.nfst = nb_dst & 0xf; - header->cn9k.nlst = nb_src & 0xf; + header->cn9k.nfst = nb_dst & DPI_MAX_POINTER; + header->cn9k.nlst = nb_src & DPI_MAX_POINTER; fptr = &dst[0]; lptr = &src[0]; } else { - header->cn9k.nfst = nb_src & 0xf; - header->cn9k.nlst = nb_dst & 0xf; + header->cn9k.nfst = nb_src & DPI_MAX_POINTER; + header->cn9k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; lptr = &dst[0]; } - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; num_words += 4; for (i = 0; i < header->cn9k.nfst; i++) { - dpivf->cmd[num_words++] = (uint64_t)fptr->length; - dpivf->cmd[num_words++] = fptr->addr; + cmd[num_words++] = (uint64_t)fptr->length; + cmd[num_words++] = fptr->addr; fptr++; } for (i = 0; i < header->cn9k.nlst; i++) { - dpivf->cmd[num_words++] = (uint64_t)lptr->length; - dpivf->cmd[num_words++] = lptr->addr; + cmd[num_words++] = (uint64_t)lptr->length; + cmd[num_words++] = lptr->addr; lptr++; } - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { STRM_DEC(dpivf->conf.c_desc, tail); return rc; @@ -439,12 +459,13 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, } static int -cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, - rte_iova_t dst, uint32_t length, uint64_t flags) +cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t dst, + uint32_t length, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; @@ -452,7 +473,6 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpivf->conf.c_desc, tail); @@ -462,17 +482,17 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, fptr = src; lptr = dst; - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; /* word3 is always 0 */ num_words += 4; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = fptr; - dpivf->cmd[num_words++] = length; - dpivf->cmd[num_words++] = lptr; + cmd[num_words++] = length; + cmd[num_words++] = fptr; + cmd[num_words++] = length; + cmd[num_words++] = lptr; - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); + rc = 
__dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { STRM_DEC(dpivf->conf.c_desc, tail); return rc; @@ -491,48 +511,47 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, } static int -cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, - const struct rte_dma_sge *src, - const struct rte_dma_sge *dst, uint16_t nb_src, - uint16_t nb_dst, uint64_t flags) +cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge *src, + const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, + uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; union dpi_instr_hdr_s *header = &dpivf->conf.hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; + uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; RTE_SET_USED(vchan); comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; - comp_ptr->cdata = DPI_REQ_CDATA; header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpivf->conf.c_desc, tail); - header->cn10k.nfst = nb_src & 0xf; - header->cn10k.nlst = nb_dst & 0xf; + header->cn10k.nfst = nb_src & DPI_MAX_POINTER; + header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; lptr = &dst[0]; - dpivf->cmd[0] = header->u[0]; - dpivf->cmd[1] = header->u[1]; - dpivf->cmd[2] = header->u[2]; + cmd[0] = header->u[0]; + cmd[1] = header->u[1]; + cmd[2] = header->u[2]; num_words += 4; for (i = 0; i < header->cn10k.nfst; i++) { - dpivf->cmd[num_words++] = (uint64_t)fptr->length; - dpivf->cmd[num_words++] = fptr->addr; + cmd[num_words++] = (uint64_t)fptr->length; + cmd[num_words++] = fptr->addr; fptr++; } for (i = 0; i < header->cn10k.nlst; i++) { - dpivf->cmd[num_words++] = (uint64_t)lptr->length; - dpivf->cmd[num_words++] = lptr->addr; + cmd[num_words++] = (uint64_t)lptr->length; + cmd[num_words++] = lptr->addr; lptr++; } - rc = __dpi_queue_write(&dpivf->rdpi, dpivf->cmd, num_words); + rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { STRM_DEC(dpivf->conf.c_desc, tail); return rc; @@ -551,50 +570,52 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, } static uint16_t -cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, - uint16_t *last_idx, bool *has_error) +cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, uint16_t *last_idx, + bool *has_error) { struct cnxk_dpi_vf_s *dpivf = dev_private; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_compl_s *comp_ptr; int cnt; RTE_SET_USED(vchan); - if (dpivf->stats.submitted == dpivf->stats.completed) - return 0; - for (cnt = 0; cnt < nb_cpls; cnt++) { - struct cnxk_dpi_compl_s *comp_ptr = - dpivf->conf.c_desc.compl_ptr[cnt]; + comp_ptr = c_desc->compl_ptr[c_desc->head]; if (comp_ptr->cdata) { if (comp_ptr->cdata == DPI_REQ_CDATA) break; *has_error = 1; dpivf->stats.errors++; + STRM_INC(*c_desc, head); break; } + + comp_ptr->cdata = DPI_REQ_CDATA; + STRM_INC(*c_desc, head); } - *last_idx = cnt - 1; - dpivf->conf.c_desc.tail = cnt; dpivf->stats.completed += cnt; + *last_idx = dpivf->stats.completed - 1; return cnt; } static uint16_t -cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, - const uint16_t nb_cpls, uint16_t *last_idx, - enum rte_dma_status_code *status) +cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, + uint16_t *last_idx, enum rte_dma_status_code *status) { struct cnxk_dpi_vf_s *dpivf = dev_private; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct 
cnxk_dpi_compl_s *comp_ptr; int cnt; RTE_SET_USED(vchan); RTE_SET_USED(last_idx); + for (cnt = 0; cnt < nb_cpls; cnt++) { - struct cnxk_dpi_compl_s *comp_ptr = - dpivf->conf.c_desc.compl_ptr[cnt]; + comp_ptr = c_desc->compl_ptr[c_desc->head]; status[cnt] = comp_ptr->cdata; if (status[cnt]) { if (status[cnt] == DPI_REQ_CDATA) @@ -602,11 +623,12 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, dpivf->stats.errors++; } + comp_ptr->cdata = DPI_REQ_CDATA; + STRM_INC(*c_desc, head); } - *last_idx = cnt - 1; - dpivf->conf.c_desc.tail = 0; dpivf->stats.completed += cnt; + *last_idx = dpivf->stats.completed - 1; return cnt; } @@ -645,8 +667,8 @@ cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused) } static int -cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, - struct rte_dma_stats *rte_stats, uint32_t size) +cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_stats *rte_stats, + uint32_t size) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; struct rte_dma_stats *stats = &dpivf->stats; @@ -770,20 +792,17 @@ cnxk_dmadev_remove(struct rte_pci_device *pci_dev) } static const struct rte_pci_id cnxk_dma_pci_map[] = { - { - RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, - PCI_DEVID_CNXK_DPI_VF) - }, + {RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_DPI_VF)}, { .vendor_id = 0, }, }; static struct rte_pci_driver cnxk_dmadev = { - .id_table = cnxk_dma_pci_map, + .id_table = cnxk_dma_pci_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA, - .probe = cnxk_dmadev_probe, - .remove = cnxk_dmadev_remove, + .probe = cnxk_dmadev_probe, + .remove = cnxk_dmadev_remove, }; RTE_PMD_REGISTER_PCI(cnxk_dmadev_pci_driver, cnxk_dmadev); diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 573bcff165..9563295af0 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -4,17 +4,17 @@ #ifndef CNXK_DMADEV_H #define CNXK_DMADEV_H -#define DPI_MAX_POINTER 15 -#define DPI_QUEUE_STOP 0x0 -#define DPI_QUEUE_START 0x1 +#include + +#define DPI_MAX_POINTER 15 #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? 
(s).max_cnt : ((s).var - 1)) -#define DPI_MAX_DESC 1024 +#define DPI_MAX_DESC 1024 /* Set Completion data to 0xFF when request submitted, * upon successful request completion engine reset to completion status */ -#define DPI_REQ_CDATA 0xFF +#define DPI_REQ_CDATA 0xFF #define CNXK_DPI_DEV_CONFIG (1ULL << 0) #define CNXK_DPI_VCHAN_CONFIG (1ULL << 1) @@ -26,7 +26,7 @@ struct cnxk_dpi_compl_s { }; struct cnxk_dpi_cdesc_data_s { - struct cnxk_dpi_compl_s *compl_ptr[DPI_MAX_DESC]; + struct cnxk_dpi_compl_s **compl_ptr; uint16_t max_cnt; uint16_t head; uint16_t tail; @@ -41,7 +41,6 @@ struct cnxk_dpi_vf_s { struct roc_dpi rdpi; struct cnxk_dpi_conf conf; struct rte_dma_stats stats; - uint64_t cmd[DPI_MAX_CMD_SIZE]; uint16_t pending; uint16_t pnum_words; uint16_t desc_idx; From patchwork Wed Aug 23 11:15:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130682 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6EFBF430DF; Wed, 23 Aug 2023 13:16:33 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7B4C343265; Wed, 23 Aug 2023 13:16:21 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 0C6BA43261; Wed, 23 Aug 2023 13:16:18 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N72aS0013047; Wed, 23 Aug 2023 04:16:18 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=zx0nQJm72CgnShJic74AfV96cR47OHNtx6x7KXTfJO8=; b=G8oes8F0uhkKEILhdft1OPwGPj4IyNgUgW9YOhD4BVNXi4aX91DtNkK50PrKNgMM5IiY uZ5gWyKqjpmZvTfnEMnP4nKtQSmrUjsY/a94IGSI2Tti+aWghl7MshXDn2CgOmz1epuM 34foQMiacmI8snj7qTbTE1C8mygQKJv8X6v2ehuZzJOejJmHn2nFuzg42+D+UgKigcln 4Z6JHGCphTLx77xbQLVDX47WJM2xCQBeQMOzzClGx7u88swa4AdudYr4QhRzbr7p+N8v ujcLMD4nLZIxcnc5//nCSQwIU58SVTFo/4XDCOnueC2hupuRk9QHlClxHSnuPTng8p0t KA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmeg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:16:18 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:16 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:15 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 51EFF3F708D; Wed, 23 Aug 2023 04:16:14 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Subject: [PATCH v5 06/12] dma/cnxk: chunk buffer failure return code Date: Wed, 23 Aug 2023 16:45:19 +0530 Message-ID: <20230823111525.3975662-6-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> 
<20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: HS6psNmw3ZUGwr-GR-sTT3oLH5C0xfFL X-Proofpoint-ORIG-GUID: HS6psNmw3ZUGwr-GR-sTT3oLH5C0xfFL X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org On chunk buffer alloc failure, ENOMEM is returned. As per DMA spec ENOSPC shall be returned on failure to allocate memory. This changeset fixes the same. Fixes: b56f1e2dad38 ("dma/cnxk: add channel operations") Cc: stable@dpdk.org Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 29 ++++++++++++++++++----------- 1 file changed, 18 insertions(+), 11 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 0db74b454d..aa6f6c710c 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -257,8 +257,7 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) { uint64_t *ptr = dpi->chunk_base; - if ((cmd_count < DPI_MIN_CMD_SIZE) || (cmd_count > DPI_MAX_CMD_SIZE) || - cmds == NULL) + if ((cmd_count < DPI_MIN_CMD_SIZE) || (cmd_count > DPI_MAX_CMD_SIZE) || cmds == NULL) return -EINVAL; /* @@ -274,11 +273,15 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) int count; uint64_t *new_buff = dpi->chunk_next; - dpi->chunk_next = - (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); + dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); if (!dpi->chunk_next) { - plt_err("Failed to alloc next buffer from NPA"); - return -ENOMEM; + plt_dp_dbg("Failed to alloc next buffer from NPA"); + + /* NPA failed to allocate a buffer. Restoring chunk_next + * to its original address. + */ + dpi->chunk_next = new_buff; + return -ENOSPC; } /* @@ -312,13 +315,17 @@ __dpi_queue_write(struct roc_dpi *dpi, uint64_t *cmds, int cmd_count) /* queue index may be greater than pool size */ if (dpi->chunk_head >= dpi->pool_size_m1) { new_buff = dpi->chunk_next; - dpi->chunk_next = - (void *)roc_npa_aura_op_alloc(dpi->aura_handle, - 0); + dpi->chunk_next = (void *)roc_npa_aura_op_alloc(dpi->aura_handle, 0); if (!dpi->chunk_next) { - plt_err("Failed to alloc next buffer from NPA"); - return -ENOMEM; + plt_dp_dbg("Failed to alloc next buffer from NPA"); + + /* NPA failed to allocate a buffer. Restoring chunk_next + * to its original address. 
+ */ + dpi->chunk_next = new_buff; + return -ENOSPC; } + /* Write next buffer address */ *ptr = (uint64_t)new_buff; dpi->chunk_base = new_buff; From patchwork Wed Aug 23 11:15:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130683 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DDBAF430DF; Wed, 23 Aug 2023 13:16:39 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A219F4326C; Wed, 23 Aug 2023 13:16:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 2683C43000 for ; Wed, 23 Aug 2023 13:16:22 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7GRN5027416 for ; Wed, 23 Aug 2023 04:16:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=hpIAPQ0q+M/x27BSDiBQq3rEqIt1EQ7NBf2DtB4rA2I=; b=gB9Z/1UZIV211FTAUXTDH3JQE1cnyexRc2WeofgWqokHBbEXgpAtuwt/SYqAwmQEsLDu lx3Mfjas8sE+53qXfArJ4PBj5F/9j8B47yY6ZZucON93LRwTi8DocKdJEX+RI0JWisjA QcVkIWb1stlFX2sAqP9YSVlycH1YZ4geQsL4zTV75iy9S9l2iiPfFN4rVFL7jTArOXa3 pIx4JpvI4dxNn52x6tHfGvdprT34vWxqiIVHT2Q2hmwks+w+/SbMNCEIVD/SoEhAMbyF CheKuncdBIhofwhUXq5K//pQXkEBypRn0Qh48LaTbCJayAc0QoCfeyRYmCUH1ndtdnWk sw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20b2kt2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:16:22 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:20 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:20 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id DA5453F70BC; Wed, 23 Aug 2023 04:16:17 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Radha Mohan Chintakuntla Subject: [PATCH v5 07/12] dma/cnxk: add DMA devops for all models of cn10xxx Date: Wed, 23 Aug 2023 16:45:20 +0530 Message-ID: <20230823111525.3975662-7-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: EhxFmiUruj5VuDF5CNceRJhZqjnTMwtn X-Proofpoint-GUID: EhxFmiUruj5VuDF5CNceRJhZqjnTMwtn X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Valid function pointers are set for DMA 
device operations i.e. cn10k_dmadev_ops are used for all cn10k devices. Signed-off-by: Radha Mohan Chintakuntla Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index aa6f6c710c..b0de0cf215 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -762,7 +762,9 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_de dmadev->fp_obj->burst_capacity = cnxk_damdev_burst_capacity; if (pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KA || + pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KAS || pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KA || + pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CNF10KB || pci_dev->id.subsystem_device_id == PCI_SUBSYSTEM_DEVID_CN10KB) { dmadev->dev_ops = &cn10k_dmadev_ops; dmadev->fp_obj->copy = cn10k_dmadev_copy; From patchwork Wed Aug 23 11:15:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130684 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D5C07430DF; Wed, 23 Aug 2023 13:16:45 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BF9B043267; Wed, 23 Aug 2023 13:16:26 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 07D5C43272 for ; Wed, 23 Aug 2023 13:16:25 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N6xVJr012908 for ; Wed, 23 Aug 2023 04:16:25 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=GzvVQ4ipAT526+ApxWWbTRP/k2AAJDNpIEowC6spNUI=; b=Ycf7Aa5ZN3xjK3mZEQlTzb4ItiprOcqWHwVhKXU5nAbRHVKXlAeeocAxm13ccRLDQzi0 48poZhwoojEFaZtMpZkkd8cpEWOIMoe+/b6erSTJ6GEygroIwY/S1tn0ozi8gfxD4wxz hxqgeIyd7jkqxTMHjr6akDVLxplJy+S1Brue76dp5n1T5p9htzoMoF2PJdX6cBBRLdex PMlN7FonT1lI5EUTnP1p1nKP/8+3RwOtxO9mqqU5KT3hVraSQDvH7VxxhlQnv9p3qS/L w6aBYHWbPiQSEecg4W4n30XaDAKKO/LztkIR8Oik1tG0lHQmfpyWde+TWPAX/7bWWDP1 9A== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmes-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:16:25 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:23 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:23 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 6B5273F708A; Wed, 23 Aug 2023 04:16:21 -0700 (PDT) 
From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Radha Mohan Chintakuntla Subject: [PATCH v5 08/12] dma/cnxk: update func field based on transfer type Date: Wed, 23 Aug 2023 16:45:21 +0530 Message-ID: <20230823111525.3975662-8-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 4Kpyr7opk1iHM25shOifiBRbjI6Pd_lJ X-Proofpoint-ORIG-GUID: 4Kpyr7opk1iHM25shOifiBRbjI6Pd_lJ X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Use pfid and vfid of src_port for incoming DMA transfers and dst_port for outgoing DMA transfers. Signed-off-by: Radha Mohan Chintakuntla Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 26 ++++++++++++++++++++++---- 1 file changed, 22 insertions(+), 4 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index b0de0cf215..4793c93ca8 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -84,13 +84,21 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.xtype = DPI_XTYPE_INBOUND; header->cn9k.lport = conf->src_port.pcie.coreid; header->cn9k.fport = 0; - header->cn9k.pvfe = 1; + header->cn9k.pvfe = conf->src_port.pcie.vfen; + if (header->cn9k.pvfe) { + header->cn9k.func = conf->src_port.pcie.pfid << 12; + header->cn9k.func |= conf->src_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_DEV: header->cn9k.xtype = DPI_XTYPE_OUTBOUND; header->cn9k.lport = 0; header->cn9k.fport = conf->dst_port.pcie.coreid; - header->cn9k.pvfe = 1; + header->cn9k.pvfe = conf->dst_port.pcie.vfen; + if (header->cn9k.pvfe) { + header->cn9k.func = conf->dst_port.pcie.pfid << 12; + header->cn9k.func |= conf->dst_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_MEM: header->cn9k.xtype = DPI_XTYPE_INTERNAL_ONLY; @@ -102,6 +110,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.xtype = DPI_XTYPE_EXTERNAL_ONLY; header->cn9k.lport = conf->src_port.pcie.coreid; header->cn9k.fport = conf->dst_port.pcie.coreid; + header->cn9k.pvfe = 0; }; max_desc = conf->nb_desc; @@ -159,13 +168,21 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.xtype = DPI_XTYPE_INBOUND; header->cn10k.lport = conf->src_port.pcie.coreid; header->cn10k.fport = 0; - header->cn10k.pvfe = 1; + header->cn10k.pvfe = conf->src_port.pcie.vfen; + if (header->cn10k.pvfe) { + header->cn10k.func = conf->src_port.pcie.pfid << 12; + header->cn10k.func |= conf->src_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_DEV: header->cn10k.xtype = DPI_XTYPE_OUTBOUND; header->cn10k.lport = 0; header->cn10k.fport = conf->dst_port.pcie.coreid; - header->cn10k.pvfe = 1; + header->cn10k.pvfe = conf->dst_port.pcie.vfen; + if (header->cn10k.pvfe) { + header->cn10k.func = conf->dst_port.pcie.pfid << 12; + header->cn10k.func 
|= conf->dst_port.pcie.vfid; + } break; case RTE_DMA_DIR_MEM_TO_MEM: header->cn10k.xtype = DPI_XTYPE_INTERNAL_ONLY; @@ -177,6 +194,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.xtype = DPI_XTYPE_EXTERNAL_ONLY; header->cn10k.lport = conf->src_port.pcie.coreid; header->cn10k.fport = conf->dst_port.pcie.coreid; + header->cn10k.pvfe = 0; }; max_desc = conf->nb_desc; From patchwork Wed Aug 23 11:15:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130685 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 16B09430DF; Wed, 23 Aug 2023 13:16:52 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E186343275; Wed, 23 Aug 2023 13:16:30 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 61CFA43275 for ; Wed, 23 Aug 2023 13:16:29 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7AkEd012861 for ; Wed, 23 Aug 2023 04:16:28 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=/LM0GJgMLxrQjH9del6d7nABHlFYxrPkeXFrxaGnEtI=; b=FL7fbtN3bTeF+LWa9TpZ8qsXiBkXa0PlEA0qDUCr518jRT4nBr1aQbbOcTXNouFf30Y0 gz3I0b56/c4yytVbJm2f9oGuEwk7xSFD/iwb2GletwpZ9p9ztahxQgQNKE8MyCytKgHK XN7Tj6r/VK7uKgFNSiwli/rV+eMwmlSPKPCbIQyT3bW35jvF08BbgC5EekKrQWSWt0CW S4TxA9YLzL9Pe2+kg6wpLWsQDLynBAL5c84BF4JcNL3P1j40mdVHYLrk1+ZenOJYlwCP KrlGrmllLHfwi2gvuMkhXxWeHTpGvF+D8fHxxk3EI6HVhwIR2TiLhkD7dozIKAeluV/X Og== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmf3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:16:28 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:26 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:26 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id AB8C53F708D; Wed, 23 Aug 2023 04:16:24 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla , Radha Mohan Chintakuntla Subject: [PATCH v5 09/12] dma/cnxk: increase vchan per queue to max 4 Date: Wed, 23 Aug 2023 16:45:22 +0530 Message-ID: <20230823111525.3975662-9-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: KgLw-zRv9f7eJ_Gr9-fxkjdMeQ3CY5iN X-Proofpoint-ORIG-GUID: KgLw-zRv9f7eJ_Gr9-fxkjdMeQ3CY5iN X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org To support multiple directions in same queue make use of multiple vchan per queue. Each vchan can be configured in some direction and used. Signed-off-by: Amit Prakash Shukla Signed-off-by: Radha Mohan Chintakuntla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 68 +++++++++++++++------------------- drivers/dma/cnxk/cnxk_dmadev.h | 11 +++--- 2 files changed, 36 insertions(+), 43 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 4793c93ca8..2193b4628f 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -22,8 +22,8 @@ cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_inf RTE_SET_USED(dev); RTE_SET_USED(size); - dev_info->max_vchans = 1; - dev_info->nb_vchans = 1; + dev_info->max_vchans = MAX_VCHANS_PER_QUEUE; + dev_info->nb_vchans = MAX_VCHANS_PER_QUEUE; dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; @@ -65,13 +65,12 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; union dpi_instr_hdr_s *header = &dpi_conf->hdr; uint16_t max_desc; uint32_t size; int i; - RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) @@ -148,13 +147,12 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, const struct rte_dma_vchan_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct cnxk_dpi_conf *dpi_conf = &dpivf->conf; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; union dpi_instr_hdr_s *header = &dpi_conf->hdr; uint16_t max_desc; uint32_t size; int i; - RTE_SET_USED(vchan); RTE_SET_USED(conf_sz); @@ -359,18 +357,17 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -399,7 +396,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -420,18 +417,17 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge const struct rte_dma_sge *dst, uint16_t nb_src, uint16_t nb_dst, 
uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn9k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); /* * For inbound case, src pointers are last pointers. @@ -467,7 +463,7 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -488,18 +484,17 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t uint32_t length, uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; rte_iova_t fptr, lptr; int num_words = 0; int rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -519,7 +514,7 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -541,18 +536,17 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge uint64_t flags) { struct cnxk_dpi_vf_s *dpivf = dev_private; - union dpi_instr_hdr_s *header = &dpivf->conf.hdr; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + union dpi_instr_hdr_s *header = &dpi_conf->hdr; const struct rte_dma_sge *fptr, *lptr; struct cnxk_dpi_compl_s *comp_ptr; uint64_t cmd[DPI_MAX_CMD_SIZE]; int num_words = 0; int i, rc; - RTE_SET_USED(vchan); - - comp_ptr = dpivf->conf.c_desc.compl_ptr[dpivf->conf.c_desc.tail]; + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; header->cn10k.ptr = (uint64_t)comp_ptr; - STRM_INC(dpivf->conf.c_desc, tail); + STRM_INC(dpi_conf->c_desc, tail); header->cn10k.nfst = nb_src & DPI_MAX_POINTER; header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; @@ -578,7 +572,7 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge rc = __dpi_queue_write(&dpivf->rdpi, cmd, num_words); if (unlikely(rc)) { - STRM_DEC(dpivf->conf.c_desc, tail); + STRM_DEC(dpi_conf->c_desc, tail); return rc; } @@ -599,12 +593,11 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, bool *has_error) { struct cnxk_dpi_vf_s *dpivf = dev_private; - struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc; struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(vchan); - for (cnt = 0; cnt < nb_cpls; cnt++) { comp_ptr = c_desc->compl_ptr[c_desc->head]; 
@@ -632,11 +625,11 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n uint16_t *last_idx, enum rte_dma_status_code *status) { struct cnxk_dpi_vf_s *dpivf = dev_private; - struct cnxk_dpi_cdesc_data_s *c_desc = &dpivf->conf.c_desc; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + struct cnxk_dpi_cdesc_data_s *c_desc = &dpi_conf->c_desc; struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(vchan); RTE_SET_USED(last_idx); for (cnt = 0; cnt < nb_cpls; cnt++) { @@ -662,11 +655,10 @@ static uint16_t cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan) { const struct cnxk_dpi_vf_s *dpivf = (const struct cnxk_dpi_vf_s *)dev_private; + const struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; uint16_t burst_cap; - RTE_SET_USED(vchan); - - burst_cap = dpivf->conf.c_desc.max_cnt - + burst_cap = dpi_conf->c_desc.max_cnt - ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; return burst_cap; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 9563295af0..4693960a19 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -6,10 +6,11 @@ #include -#define DPI_MAX_POINTER 15 -#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) -#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) -#define DPI_MAX_DESC 1024 +#define DPI_MAX_POINTER 15 +#define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) +#define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) +#define DPI_MAX_DESC 1024 +#define MAX_VCHANS_PER_QUEUE 4 /* Set Completion data to 0xFF when request submitted, * upon successful request completion engine reset to completion status @@ -39,7 +40,7 @@ struct cnxk_dpi_conf { struct cnxk_dpi_vf_s { struct roc_dpi rdpi; - struct cnxk_dpi_conf conf; + struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE]; struct rte_dma_stats stats; uint16_t pending; uint16_t pnum_words; From patchwork Wed Aug 23 11:15:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130686 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EE84A430DF; Wed, 23 Aug 2023 13:16:57 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0B6C1410F1; Wed, 23 Aug 2023 13:16:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 1F836410F1 for ; Wed, 23 Aug 2023 13:16:31 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N7dvm5027509 for ; Wed, 23 Aug 2023 04:16:31 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=t95Zwlg0aiGYaV5p7u0OWa+rba1q9mAuwMK/t8MraVo=; b=GjXm/NXiP4pLiL1pNXQwyi8rbQtUKa9xlZO/wojVdTAKEzj8g/DVYj4h4QTK5RI/vDmR Korz9qMxBJbsDgoUmr2ALkR5x5QK2B/CWi2goXN5S8uR7QV2AKCCtRPgPAG22m+HJz+R NINy0yUCw/nP9Gt2B/eWRVd4DYkMocoTgswMMfBoM9QYJFeRZEYFXsgZ8Eovr2zNh2oE Wl9lSqBTYJho52+5Ytxw7mS/kaVsDWyIoTp0x6GCU3Tm/9CGkW5PlhLUy5xT3vaGscUF 
fjcztaInJPCnKOu/lobvYL0TNZcrIOLsRwklPU2IotZmRD45X4+54hFtihcJOhrdnwre gg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20b2ktc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:16:31 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:29 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:29 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id D1DF83F708D; Wed, 23 Aug 2023 04:16:27 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Amit Prakash Shukla Subject: [PATCH v5 10/12] dma/cnxk: vchan support enhancement Date: Wed, 23 Aug 2023 16:45:23 +0530 Message-ID: <20230823111525.3975662-10-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 50yCygZuLi-eSvam_sAHG6R-MfhT9GGg X-Proofpoint-GUID: 50yCygZuLi-eSvam_sAHG6R-MfhT9GGg X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Code changes to realign dpi private structure based on vchan. Changeset also resets DMA dev stats while starting dma device. Signed-off-by: Amit Prakash Shukla --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. 
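As context for the per-vchan rework in this patch: the feature is consumed entirely through the generic rte_dma API, so nothing changes for applications at the API level. Below is a minimal application-side sketch (not part of this patch) that configures one cnxk DMA queue with two vchans in different directions; the device id, descriptor count and PCIe pfid/vfid values are illustrative assumptions only.

#include <rte_dmadev.h>

/*
 * Sketch: one dmadev (DPI queue) carrying two vchans, each bound to a
 * different transfer direction, as enabled by this series. dev_id 0 and
 * the PCIe pfid/vfid numbers are placeholders, not taken from the patch.
 */
static int
setup_two_vchans(int16_t dev_id)
{
	struct rte_dma_conf dev_conf = { .nb_vchans = 2 };
	struct rte_dma_vchan_conf m2m = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 1024,
	};
	struct rte_dma_vchan_conf m2d = {
		.direction = RTE_DMA_DIR_MEM_TO_DEV,
		.nb_desc = 1024,
		.dst_port = {
			.port_type = RTE_DMA_PORT_PCIE,
			/* Illustrative PCIe function: PF 1 with VF 2 enabled. */
			.pcie = { .coreid = 0, .pfid = 1, .vfen = 1, .vfid = 2 },
		},
	};

	if (rte_dma_configure(dev_id, &dev_conf) < 0)
		return -1;

	/* vchan 0: mem-to-mem, vchan 1: mem-to-dev, on the same queue. */
	if (rte_dma_vchan_setup(dev_id, 0, &m2m) < 0 ||
	    rte_dma_vchan_setup(dev_id, 1, &m2d) < 0)
		return -1;

	/* With this patch, starting the device also resets per-vchan stats. */
	return rte_dma_start(dev_id);
}

Copies are then enqueued per vchan with rte_dma_copy(dev_id, 0, ...) or rte_dma_copy(dev_id, 1, ...); each vchan keeps its own descriptor ring and statistics in the driver, which is why the private structure is realigned per vchan here.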
drivers/dma/cnxk/cnxk_dmadev.c | 209 ++++++++++++++++++++++++--------- drivers/dma/cnxk/cnxk_dmadev.h | 18 +-- 2 files changed, 165 insertions(+), 62 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 2193b4628f..0b77543f6a 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -16,35 +16,79 @@ #include +static int cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan); + static int cnxk_dmadev_info_get(const struct rte_dma_dev *dev, struct rte_dma_info *dev_info, uint32_t size) { - RTE_SET_USED(dev); + struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; RTE_SET_USED(size); dev_info->max_vchans = MAX_VCHANS_PER_QUEUE; - dev_info->nb_vchans = MAX_VCHANS_PER_QUEUE; + dev_info->nb_vchans = dpivf->num_vchans; dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM | RTE_DMA_CAPA_MEM_TO_DEV | RTE_DMA_CAPA_DEV_TO_MEM | RTE_DMA_CAPA_DEV_TO_DEV | RTE_DMA_CAPA_OPS_COPY | RTE_DMA_CAPA_OPS_COPY_SG; dev_info->max_desc = DPI_MAX_DESC; - dev_info->min_desc = 2; + dev_info->min_desc = DPI_MIN_DESC; dev_info->max_sges = DPI_MAX_POINTER; return 0; } +static int +cnxk_dmadev_vchan_free(struct cnxk_dpi_vf_s *dpivf, uint16_t vchan) +{ + struct cnxk_dpi_conf *dpi_conf; + uint16_t num_vchans; + uint16_t max_desc; + int i, j; + + if (vchan == RTE_DMA_ALL_VCHAN) { + num_vchans = dpivf->num_vchans; + i = 0; + } else { + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + num_vchans = vchan + 1; + i = vchan; + } + + for (; i < num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + max_desc = dpi_conf->c_desc.max_cnt; + if (dpi_conf->c_desc.compl_ptr) { + for (j = 0; j < max_desc; j++) + rte_free(dpi_conf->c_desc.compl_ptr[j]); + } + + rte_free(dpi_conf->c_desc.compl_ptr); + dpi_conf->c_desc.compl_ptr = NULL; + } + + return 0; +} + static int cnxk_dmadev_configure(struct rte_dma_dev *dev, const struct rte_dma_conf *conf, uint32_t conf_sz) { struct cnxk_dpi_vf_s *dpivf = NULL; int rc = 0; - RTE_SET_USED(conf); RTE_SET_USED(conf_sz); dpivf = dev->fp_obj->dev_private; + /* Accept only number of vchans as config from application. */ + if (!(dpivf->flag & CNXK_DPI_DEV_START)) { + /* After config function, vchan setup function has to be called. + * Free up vchan memory if any, before configuring num_vchans. + */ + cnxk_dmadev_vchan_free(dpivf, RTE_DMA_ALL_VCHAN); + dpivf->num_vchans = conf->nb_vchans; + } + if (dpivf->flag & CNXK_DPI_DEV_CONFIG) return rc; @@ -73,7 +117,7 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(conf_sz); - if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + if (dpivf->flag & CNXK_DPI_DEV_START) return 0; header->cn9k.pt = DPI_HDR_PT_ZBW_CA; @@ -112,6 +156,9 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn9k.pvfe = 0; }; + /* Free up descriptor memory before allocating. 
*/ + cnxk_dmadev_vchan_free(dpivf, vchan); + max_desc = conf->nb_desc; if (!rte_is_power_of_2(max_desc)) max_desc = rte_align32pow2(max_desc); @@ -130,14 +177,15 @@ cnxk_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, for (i = 0; i < max_desc; i++) { dpi_conf->c_desc.compl_ptr[i] = rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + if (!dpi_conf->c_desc.compl_ptr[i]) { + plt_err("Failed to allocate for descriptor memory"); + return -ENOMEM; + } + dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; } dpi_conf->c_desc.max_cnt = (max_desc - 1); - dpi_conf->c_desc.head = 0; - dpi_conf->c_desc.tail = 0; - dpivf->pending = 0; - dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -155,8 +203,7 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, RTE_SET_USED(conf_sz); - - if (dpivf->flag & CNXK_DPI_VCHAN_CONFIG) + if (dpivf->flag & CNXK_DPI_DEV_START) return 0; header->cn10k.pt = DPI_HDR_PT_ZBW_CA; @@ -195,6 +242,9 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, header->cn10k.pvfe = 0; }; + /* Free up descriptor memory before allocating. */ + cnxk_dmadev_vchan_free(dpivf, vchan); + max_desc = conf->nb_desc; if (!rte_is_power_of_2(max_desc)) max_desc = rte_align32pow2(max_desc); @@ -213,14 +263,14 @@ cn10k_dmadev_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan, for (i = 0; i < max_desc; i++) { dpi_conf->c_desc.compl_ptr[i] = rte_zmalloc(NULL, sizeof(struct cnxk_dpi_compl_s), 0); + if (!dpi_conf->c_desc.compl_ptr[i]) { + plt_err("Failed to allocate for descriptor memory"); + return -ENOMEM; + } dpi_conf->c_desc.compl_ptr[i]->cdata = DPI_REQ_CDATA; } dpi_conf->c_desc.max_cnt = (max_desc - 1); - dpi_conf->c_desc.head = 0; - dpi_conf->c_desc.tail = 0; - dpivf->pending = 0; - dpivf->flag |= CNXK_DPI_VCHAN_CONFIG; return 0; } @@ -229,13 +279,27 @@ static int cnxk_dmadev_start(struct rte_dma_dev *dev) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + struct cnxk_dpi_conf *dpi_conf; + int i, j; if (dpivf->flag & CNXK_DPI_DEV_START) return 0; - dpivf->desc_idx = 0; - dpivf->pending = 0; - dpivf->pnum_words = 0; + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + dpi_conf->c_desc.head = 0; + dpi_conf->c_desc.tail = 0; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; + dpi_conf->desc_idx = 0; + for (j = 0; j < dpi_conf->c_desc.max_cnt; j++) { + if (dpi_conf->c_desc.compl_ptr[j]) + dpi_conf->c_desc.compl_ptr[j]->cdata = DPI_REQ_CDATA; + } + + cnxk_stats_reset(dev, i); + } + roc_dpi_enable(&dpivf->rdpi); dpivf->flag |= CNXK_DPI_DEV_START; @@ -249,7 +313,6 @@ cnxk_dmadev_stop(struct rte_dma_dev *dev) struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; roc_dpi_disable(&dpivf->rdpi); - dpivf->flag &= ~CNXK_DPI_DEV_START; return 0; @@ -261,8 +324,10 @@ cnxk_dmadev_close(struct rte_dma_dev *dev) struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; roc_dpi_disable(&dpivf->rdpi); + cnxk_dmadev_vchan_free(dpivf, RTE_DMA_ALL_VCHAN); roc_dpi_dev_fini(&dpivf->rdpi); + /* Clear all flags as we close the device. 
*/ dpivf->flag = 0; return 0; @@ -403,13 +468,13 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + dpi_conf->stats.submitted++; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static int @@ -470,13 +535,13 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; + dpi_conf->stats.submitted += nb_src; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static int @@ -521,13 +586,13 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted++; + dpi_conf->stats.submitted++; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return dpivf->desc_idx++; + return dpi_conf->desc_idx++; } static int @@ -579,13 +644,13 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += nb_src; + dpi_conf->stats.submitted += nb_src; } else { - dpivf->pnum_words += num_words; - dpivf->pending++; + dpi_conf->pnum_words += num_words; + dpi_conf->pending++; } - return (dpivf->desc_idx++); + return (dpi_conf->desc_idx++); } static uint16_t @@ -605,7 +670,7 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, if (comp_ptr->cdata == DPI_REQ_CDATA) break; *has_error = 1; - dpivf->stats.errors++; + dpi_conf->stats.errors++; STRM_INC(*c_desc, head); break; } @@ -614,8 +679,8 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, STRM_INC(*c_desc, head); } - dpivf->stats.completed += cnt; - *last_idx = dpivf->stats.completed - 1; + dpi_conf->stats.completed += cnt; + *last_idx = dpi_conf->stats.completed - 1; return cnt; } @@ -639,14 +704,14 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n if (status[cnt] == DPI_REQ_CDATA) break; - dpivf->stats.errors++; + dpi_conf->stats.errors++; } comp_ptr->cdata = DPI_REQ_CDATA; STRM_INC(*c_desc, head); } - dpivf->stats.completed += cnt; - *last_idx = dpivf->stats.completed - 1; + dpi_conf->stats.completed += cnt; + *last_idx = dpi_conf->stats.completed - 1; return cnt; } @@ -659,26 +724,28 @@ cnxk_damdev_burst_capacity(const void *dev_private, uint16_t vchan) uint16_t burst_cap; burst_cap = dpi_conf->c_desc.max_cnt - - ((dpivf->stats.submitted - dpivf->stats.completed) + dpivf->pending) + 1; + ((dpi_conf->stats.submitted - dpi_conf->stats.completed) + dpi_conf->pending) + + 1; return burst_cap; } static int -cnxk_dmadev_submit(void *dev_private, uint16_t vchan __rte_unused) +cnxk_dmadev_submit(void *dev_private, uint16_t vchan) { struct cnxk_dpi_vf_s *dpivf = dev_private; - uint32_t num_words = dpivf->pnum_words; + struct cnxk_dpi_conf *dpi_conf = &dpivf->conf[vchan]; + uint32_t num_words 
= dpi_conf->pnum_words; - if (!dpivf->pnum_words) + if (!dpi_conf->pnum_words) return 0; rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpivf->stats.submitted += dpivf->pending; - dpivf->pnum_words = 0; - dpivf->pending = 0; + dpi_conf->stats.submitted += dpi_conf->pending; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; return 0; } @@ -688,25 +755,59 @@ cnxk_stats_get(const struct rte_dma_dev *dev, uint16_t vchan, struct rte_dma_sta uint32_t size) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; - struct rte_dma_stats *stats = &dpivf->stats; - - RTE_SET_USED(vchan); + struct cnxk_dpi_conf *dpi_conf; + int i; if (size < sizeof(rte_stats)) return -EINVAL; if (rte_stats == NULL) return -EINVAL; - *rte_stats = *stats; + /* Stats of all vchans requested. */ + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + rte_stats->submitted += dpi_conf->stats.submitted; + rte_stats->completed += dpi_conf->stats.completed; + rte_stats->errors += dpi_conf->stats.errors; + } + + goto done; + } + + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + dpi_conf = &dpivf->conf[vchan]; + *rte_stats = dpi_conf->stats; + +done: return 0; } static int -cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan __rte_unused) +cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) { struct cnxk_dpi_vf_s *dpivf = dev->fp_obj->dev_private; + struct cnxk_dpi_conf *dpi_conf; + int i; + + /* clear stats of all vchans. */ + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dpivf->num_vchans; i++) { + dpi_conf = &dpivf->conf[i]; + dpi_conf->stats = (struct rte_dma_stats){0}; + } + + return 0; + } + + if (vchan >= MAX_VCHANS_PER_QUEUE) + return -EINVAL; + + dpi_conf = &dpivf->conf[vchan]; + dpi_conf->stats = (struct rte_dma_stats){0}; - dpivf->stats = (struct rte_dma_stats){0}; return 0; } diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 4693960a19..f375143b16 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -10,6 +10,7 @@ #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? 
(s).max_cnt : ((s).var - 1)) #define DPI_MAX_DESC 1024 +#define DPI_MIN_DESC 2 #define MAX_VCHANS_PER_QUEUE 4 /* Set Completion data to 0xFF when request submitted, @@ -17,9 +18,8 @@ */ #define DPI_REQ_CDATA 0xFF -#define CNXK_DPI_DEV_CONFIG (1ULL << 0) -#define CNXK_DPI_VCHAN_CONFIG (1ULL << 1) -#define CNXK_DPI_DEV_START (1ULL << 2) +#define CNXK_DPI_DEV_CONFIG (1ULL << 0) +#define CNXK_DPI_DEV_START (1ULL << 1) struct cnxk_dpi_compl_s { uint64_t cdata; @@ -36,16 +36,18 @@ struct cnxk_dpi_cdesc_data_s { struct cnxk_dpi_conf { union dpi_instr_hdr_s hdr; struct cnxk_dpi_cdesc_data_s c_desc; + uint16_t pnum_words; + uint16_t pending; + uint16_t desc_idx; + uint16_t pad0; + struct rte_dma_stats stats; }; struct cnxk_dpi_vf_s { struct roc_dpi rdpi; struct cnxk_dpi_conf conf[MAX_VCHANS_PER_QUEUE]; - struct rte_dma_stats stats; - uint16_t pending; - uint16_t pnum_words; - uint16_t desc_idx; + uint16_t num_vchans; uint16_t flag; -}; +} __plt_cache_aligned; #endif From patchwork Wed Aug 23 11:15:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130687 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C7429430DF; Wed, 23 Aug 2023 13:17:06 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7558243281; Wed, 23 Aug 2023 13:16:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 3F3224327C; Wed, 23 Aug 2023 13:16:35 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N6xVJt012908; Wed, 23 Aug 2023 04:16:34 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=IHaex97cNCpGNx4F0p8dTJlBHPxHhj/r/dc17j8Hgio=; b=c51D34ctAXNH05aaHsMz4YIAEHem/RW0hf05m2fVdzs3Wj7xwv4/6v2vEd8l4z0sg/RQ zuq6+CsRaZkUdvKd9SybnsV92CNMoYauNqI2dJ95VoXCNoHaFhrP7FZxQCpab2/KM2jv /edwRc4Sfoh+1+QImznG1Y357zqIQs3JJUOZCY+MnHeKaeze5QjbDpImjSTjHAkDZ92Z 6YbTIWgPMItkTbcnBZF5y3wy+VLG1wnSj23CfQ6iFukySLuREJAMD14hyvcfGU3ai8Pu 3cR1mL7sV9lbXDT5NUKxcB+PTzlMUUEg9y5FNgN6RDqnPgm+nvyQBVLkflL3sm636LBR LQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmfa-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 23 Aug 2023 04:16:34 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:32 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:32 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id DABB03F708A; Wed, 23 Aug 2023 04:16:30 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , Subject: [PATCH v5 11/12] dma/cnxk: add completion ring tail wrap check Date: Wed, 23 Aug 2023 16:45:24 +0530 Message-ID: <20230823111525.3975662-11-amitprakashs@marvell.com> 
X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: shnTkqHGFDHRCQcTiLbq3tiqQHkKbz-U X-Proofpoint-ORIG-GUID: shnTkqHGFDHRCQcTiLbq3tiqQHkKbz-U X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Vamsi Attunuru Adds a check to avoid tail wrap when completion desc ring is full. Also patch increase max desc size to 2048. Fixes: b56f1e2dad38 ("dma/cnxk: add channel operations") Fixes: 3340c3e22783 ("dma/cnxk: add scatter-gather copy") Fixes: 681851b347ad ("dma/cnxk: support CN10K DMA engine") Cc: stable@dpdk.org Signed-off-by: Vamsi Attunuru --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 22 ++++++++++++++++++++-- drivers/dma/cnxk/cnxk_dmadev.h | 2 +- 2 files changed, 21 insertions(+), 3 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 0b77543f6a..89ff4c18ac 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -434,6 +434,11 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn9k.nfst = 1; header->cn9k.nlst = 1; @@ -494,6 +499,11 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge header->cn9k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + /* * For inbound case, src pointers are last pointers. * For all other cases, src pointers are first pointers. 
@@ -561,6 +571,11 @@ cn10k_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn10k.nfst = 1; header->cn10k.nlst = 1; @@ -613,6 +628,11 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge header->cn10k.ptr = (uint64_t)comp_ptr; STRM_INC(dpi_conf->c_desc, tail); + if (unlikely(dpi_conf->c_desc.tail == dpi_conf->c_desc.head)) { + STRM_DEC(dpi_conf->c_desc, tail); + return -ENOSPC; + } + header->cn10k.nfst = nb_src & DPI_MAX_POINTER; header->cn10k.nlst = nb_dst & DPI_MAX_POINTER; fptr = &src[0]; @@ -695,8 +715,6 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n struct cnxk_dpi_compl_s *comp_ptr; int cnt; - RTE_SET_USED(last_idx); - for (cnt = 0; cnt < nb_cpls; cnt++) { comp_ptr = c_desc->compl_ptr[c_desc->head]; status[cnt] = comp_ptr->cdata; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index f375143b16..9c6c898d23 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -9,7 +9,7 @@ #define DPI_MAX_POINTER 15 #define STRM_INC(s, var) ((s).var = ((s).var + 1) & (s).max_cnt) #define STRM_DEC(s, var) ((s).var = ((s).var - 1) == -1 ? (s).max_cnt : ((s).var - 1)) -#define DPI_MAX_DESC 1024 +#define DPI_MAX_DESC 2048 #define DPI_MIN_DESC 2 #define MAX_VCHANS_PER_QUEUE 4 From patchwork Wed Aug 23 11:15:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 130688 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 22935430DF; Wed, 23 Aug 2023 13:17:13 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B4F814325F; Wed, 23 Aug 2023 13:16:43 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id AF12D42D13 for ; Wed, 23 Aug 2023 13:16:39 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 37N72F88016649 for ; Wed, 23 Aug 2023 04:16:39 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=KxFuNahgXwPWhcKlq3YQo8VCb/edrSr+zZH4goJv0hM=; b=G5PBwCRbEwVtO7iFVnTtjPG7s4qARWqqfo37ivqMS56AmlbQAf+7B01WAcBai2Bu0lwA Dbky+BSS+RxedW6IdWe61Pi9a2pxbmMJsu49F1AYjzuQ1QfHYmLmrBCHoFHhmoqm9Zhn mcQ84CSU4KmfZSErfqLeY6NH+ckEDKHhiIP9DGU2/VfDwFlPFIKOWj8jaMJX2xesFDez 6jthVKF/eA3CxNZ8OxCSk1m4Z1iidZiBOD3j+SsMBKIxusWpFT6yBdDn84pEW2UXppU5 hIVOTdIxmqrocTkSeNloDk0q+MBNAwy2TECR+bgShrz9kPRU76CvOotHtsPPbk3ij+gh oQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3sn20ctmfg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 23 Aug 2023 04:16:39 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Wed, 23 Aug 2023 04:16:36 
-0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Wed, 23 Aug 2023 04:16:36 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 9BCB93F708A; Wed, 23 Aug 2023 04:16:35 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , Subject: [PATCH v5 12/12] dma/cnxk: track last index return value Date: Wed, 23 Aug 2023 16:45:25 +0530 Message-ID: <20230823111525.3975662-12-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230823111525.3975662-1-amitprakashs@marvell.com> References: <20230821174942.3165191-1-amitprakashs@marvell.com> <20230823111525.3975662-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: oK_QHzhdbT_e9XiOVftTRDM9zYFeiHwq X-Proofpoint-ORIG-GUID: oK_QHzhdbT_e9XiOVftTRDM9zYFeiHwq X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.957,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-08-23_06,2023-08-22_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Vamsi Attunuru last index value might lost the order when dma stats are reset in between copy operations. Patch adds a variable to track the completed count, that can be used to compute the last index, also patch adds misc other changes. Signed-off-by: Vamsi Attunuru --- v2: - Fix for bugs observed in v1. - Squashed few commits. v3: - Resolved review suggestions. - Code improvement. v4: - Resolved checkpatch warnings. v5: - Updated commit message. - Split the commits. drivers/dma/cnxk/cnxk_dmadev.c | 17 ++++++++++------- drivers/dma/cnxk/cnxk_dmadev.h | 1 + 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 89ff4c18ac..eec6a897e2 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -298,6 +298,7 @@ cnxk_dmadev_start(struct rte_dma_dev *dev) } cnxk_stats_reset(dev, i); + dpi_conf->completed_offset = 0; } roc_dpi_enable(&dpivf->rdpi); @@ -479,7 +480,7 @@ cnxk_dmadev_copy(void *dev_private, uint16_t vchan, rte_iova_t src, rte_iova_t d dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static int @@ -545,13 +546,13 @@ cnxk_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpi_conf->stats.submitted += nb_src; + dpi_conf->stats.submitted++; } else { dpi_conf->pnum_words += num_words; dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static int @@ -664,13 +665,13 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge if (flags & RTE_DMA_OP_FLAG_SUBMIT) { rte_wmb(); plt_write64(num_words, dpivf->rdpi.rbase + DPI_VDMA_DBELL); - dpi_conf->stats.submitted += nb_src; + dpi_conf->stats.submitted++; } else { dpi_conf->pnum_words += num_words; dpi_conf->pending++; } - return (dpi_conf->desc_idx++); + return dpi_conf->desc_idx++; } static uint16_t @@ -700,7 +701,7 @@ cnxk_dmadev_completed(void *dev_private, uint16_t vchan, const uint16_t nb_cpls, } dpi_conf->stats.completed += cnt; - *last_idx = dpi_conf->stats.completed - 1; + *last_idx = (dpi_conf->completed_offset + 
dpi_conf->stats.completed - 1) & 0xffff; return cnt; } @@ -729,7 +730,7 @@ cnxk_dmadev_completed_status(void *dev_private, uint16_t vchan, const uint16_t n } dpi_conf->stats.completed += cnt; - *last_idx = dpi_conf->stats.completed - 1; + *last_idx = (dpi_conf->completed_offset + dpi_conf->stats.completed - 1) & 0xffff; return cnt; } @@ -814,6 +815,7 @@ cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) if (vchan == RTE_DMA_ALL_VCHAN) { for (i = 0; i < dpivf->num_vchans; i++) { dpi_conf = &dpivf->conf[i]; + dpi_conf->completed_offset += dpi_conf->stats.completed; dpi_conf->stats = (struct rte_dma_stats){0}; } @@ -824,6 +826,7 @@ cnxk_stats_reset(struct rte_dma_dev *dev, uint16_t vchan) return -EINVAL; dpi_conf = &dpivf->conf[vchan]; + dpi_conf->completed_offset += dpi_conf->stats.completed; dpi_conf->stats = (struct rte_dma_stats){0}; return 0; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 9c6c898d23..254e7fea20 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -41,6 +41,7 @@ struct cnxk_dpi_conf { uint16_t desc_idx; uint16_t pad0; struct rte_dma_stats stats; + uint64_t completed_offset; }; struct cnxk_dpi_vf_s {