From patchwork Sat Apr 3 14:17:43 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 90522
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: dev@dpdk.org
Date: Sat, 3 Apr 2021 19:47:43 +0530
Message-ID: <20210403141751.215926-3-asekhar@marvell.com>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20210403141751.215926-1-asekhar@marvell.com>
References: <20210305162149.2196166-1-asekhar@marvell.com>
	<20210403141751.215926-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH v2 03/11] mempool/cnxk: add generic ops
List-Id: DPDK patches and discussions

Add generic cnxk mempool ops which enqueue objects to and dequeue
objects from the pool one element at a time.
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cnxk_mempool.h     |  26 ++++
 drivers/mempool/cnxk/cnxk_mempool_ops.c | 171 ++++++++++++++++++++++++
 drivers/mempool/cnxk/meson.build        |   3 +-
 3 files changed, 199 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/cnxk_mempool.h
 create mode 100644 drivers/mempool/cnxk/cnxk_mempool_ops.c

diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
new file mode 100644
index 0000000000..099b7f6998
--- /dev/null
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef _CNXK_MEMPOOL_H_
+#define _CNXK_MEMPOOL_H_
+
+#include <rte_mempool.h>
+
+unsigned int cnxk_mempool_get_count(const struct rte_mempool *mp);
+ssize_t cnxk_mempool_calc_mem_size(const struct rte_mempool *mp,
+				   uint32_t obj_num, uint32_t pg_shift,
+				   size_t *min_chunk_size, size_t *align);
+int cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
+			  void *vaddr, rte_iova_t iova, size_t len,
+			  rte_mempool_populate_obj_cb_t *obj_cb,
+			  void *obj_cb_arg);
+int cnxk_mempool_alloc(struct rte_mempool *mp);
+void cnxk_mempool_free(struct rte_mempool *mp);
+
+int __rte_hot cnxk_mempool_enq(struct rte_mempool *mp, void *const *obj_table,
+			       unsigned int n);
+int __rte_hot cnxk_mempool_deq(struct rte_mempool *mp, void **obj_table,
+			       unsigned int n);
+
+#endif
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
new file mode 100644
index 0000000000..2ce1816c04
--- /dev/null
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -0,0 +1,171 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include <rte_mempool.h>
+
+#include "roc_api.h"
+#include "cnxk_mempool.h"
+
+int __rte_hot
+cnxk_mempool_enq(struct rte_mempool *mp, void *const *obj_table, unsigned int n)
+{
+	unsigned int index;
+
+	/* Ensure mbuf init changes are written before the free pointers
+	 * are enqueued to the stack.
+	 */
+	rte_io_wmb();
+	for (index = 0; index < n; index++)
+		roc_npa_aura_op_free(mp->pool_id, 0,
+				     (uint64_t)obj_table[index]);
+
+	return 0;
+}
+
+int __rte_hot
+cnxk_mempool_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	unsigned int index;
+	uint64_t obj;
+
+	for (index = 0; index < n; index++, obj_table++) {
+		int retry = 4;
+
+		/* Retry few times before failing */
+		do {
+			obj = roc_npa_aura_op_alloc(mp->pool_id, 0);
+		} while (retry-- && (obj == 0));
+
+		if (obj == 0) {
+			cnxk_mempool_enq(mp, obj_table - index, index);
+			return -ENOENT;
+		}
+		*obj_table = (void *)obj;
+	}
+
+	return 0;
+}
+
+unsigned int
+cnxk_mempool_get_count(const struct rte_mempool *mp)
+{
+	return (unsigned int)roc_npa_aura_op_available(mp->pool_id);
+}
+
+ssize_t
+cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+			   uint32_t pg_shift, size_t *min_chunk_size,
+			   size_t *align)
+{
+	size_t total_elt_sz;
+
+	/* Need space for one more obj on each chunk to fulfill
+	 * alignment requirements.
+	 */
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+	return rte_mempool_op_calc_mem_size_helper(
+		mp, obj_num, pg_shift, total_elt_sz, min_chunk_size, align);
+}
+
+int
+cnxk_mempool_alloc(struct rte_mempool *mp)
+{
+	uint64_t aura_handle = 0;
+	struct npa_aura_s aura;
+	struct npa_pool_s pool;
+	uint32_t block_count;
+	size_t block_size;
+	int rc = -ERANGE;
+
+	block_size = mp->elt_size + mp->header_size + mp->trailer_size;
+	block_count = mp->size;
+	if (mp->header_size % ROC_ALIGN != 0) {
+		plt_err("Header size should be multiple of %dB", ROC_ALIGN);
+		goto error;
+	}
+
+	if (block_size % ROC_ALIGN != 0) {
+		plt_err("Block size should be multiple of %dB", ROC_ALIGN);
+		goto error;
+	}
+
+	memset(&aura, 0, sizeof(struct npa_aura_s));
+	memset(&pool, 0, sizeof(struct npa_pool_s));
+	pool.nat_align = 1;
+	pool.buf_offset = mp->header_size / ROC_ALIGN;
+
+	/* Use driver specific mp->pool_config to override aura config */
+	if (mp->pool_config != NULL)
+		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
+
+	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
+				 &pool);
+	if (rc) {
+		plt_err("Failed to alloc pool or aura rc=%d", rc);
+		goto error;
+	}
+
+	/* Store aura_handle for future queue operations */
+	mp->pool_id = aura_handle;
+	plt_npa_dbg("block_sz=%lu block_count=%d aura_handle=0x%" PRIx64,
+		    block_size, block_count, aura_handle);
+
+	return 0;
+error:
+	return rc;
+}
+
+void
+cnxk_mempool_free(struct rte_mempool *mp)
+{
+	int rc = 0;
+
+	plt_npa_dbg("aura_handle=0x%" PRIx64, mp->pool_id);
+	rc = roc_npa_pool_destroy(mp->pool_id);
+	if (rc)
+		plt_err("Failed to free pool or aura rc=%d", rc);
+}
+
+int
+cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
+		      void *vaddr, rte_iova_t iova, size_t len,
+		      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz, off;
+	int num_elts;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* Align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+	num_elts = len / total_elt_sz;
+
+	plt_npa_dbg("iova %" PRIx64 ", aligned iova %" PRIx64 "", iova - off,
+		    iova);
+	plt_npa_dbg("length %" PRIu64 ", aligned length %" PRIu64 "",
+		    (uint64_t)(len + off), (uint64_t)len);
+	plt_npa_dbg("element size %" PRIu64 "", (uint64_t)total_elt_sz);
+	plt_npa_dbg("requested objects %" PRIu64 ", possible objects %" PRIu64
+		    "", (uint64_t)max_objs, (uint64_t)num_elts);
+
+	roc_npa_aura_op_range_set(mp->pool_id, iova,
+				  iova + num_elts * total_elt_sz);
+
+	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
+	return rte_mempool_op_populate_helper(
+		mp, RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ, max_objs, vaddr, iova,
+		len, obj_cb, obj_cb_arg);
+}
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index 0be0802373..52244e728b 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -8,6 +8,7 @@ if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
 	subdir_done()
 endif
 
-sources = files('cnxk_mempool.c')
+sources = files('cnxk_mempool.c',
+		'cnxk_mempool_ops.c')
 
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']