From patchwork Fri Mar 5 16:21:44 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88584
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
Date: Fri, 5 Mar 2021 21:51:44 +0530
Message-ID: <20210305162149.2196166-2-asekhar@marvell.com>
In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH 1/6] mempool/cnxk: add build infra and device probe

Add the meson based build infrastructure along with mempool device
probe.
Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cnxk_mempool.c | 212 ++++++++++++++++++++++++++++
 drivers/mempool/cnxk/cnxk_mempool.h |  12 ++
 drivers/mempool/cnxk/meson.build    |  29 ++++
 drivers/mempool/cnxk/version.map    |   3 +
 drivers/mempool/meson.build         |   3 +-
 5 files changed, 258 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/cnxk_mempool.c
 create mode 100644 drivers/mempool/cnxk/cnxk_mempool.h
 create mode 100644 drivers/mempool/cnxk/meson.build
 create mode 100644 drivers/mempool/cnxk/version.map

diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
new file mode 100644
index 0000000000..c24497a6e5
--- /dev/null
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -0,0 +1,212 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "roc_api.h"
+#include "cnxk_mempool.h"
+
+#define CNXK_NPA_DEV_NAME	 RTE_STR(cnxk_npa_dev_)
+#define CNXK_NPA_DEV_NAME_LEN	 (sizeof(CNXK_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
+#define CNXK_NPA_MAX_POOLS_PARAM "max_pools"
+
+uintptr_t *cnxk_mempool_internal_data;
+
+static inline uint32_t
+npa_aura_size_to_u32(uint8_t val)
+{
+	if (val == NPA_AURA_SZ_0)
+		return 128;
+	if (val >= NPA_AURA_SZ_MAX)
+		return BIT_ULL(20);
+
+	return 1 << (val + 6);
+}
+
+static int
+parse_max_pools(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+	if (val < npa_aura_size_to_u32(NPA_AURA_SZ_128))
+		val = 128;
+	if (val > npa_aura_size_to_u32(NPA_AURA_SZ_1M))
+		val = BIT_ULL(20);
+
+	*(uint8_t *)extra_args = rte_log2_u32(val) - 6;
+	return 0;
+}
+
+static inline uint8_t
+parse_aura_size(struct rte_devargs *devargs)
+{
+	uint8_t aura_sz = NPA_AURA_SZ_128;
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		goto exit;
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, CNXK_NPA_MAX_POOLS_PARAM, &parse_max_pools,
+			   &aura_sz);
+	rte_kvargs_free(kvlist);
+exit:
+	return aura_sz;
+}
+
+static inline char *
+npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
+{
+	snprintf(name, CNXK_NPA_DEV_NAME_LEN, CNXK_NPA_DEV_NAME PCI_PRI_FMT,
+		 pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid,
+		 pci_dev->addr.function);
+
+	return name;
+}
+
+static int
+npa_init(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NPA_DEV_NAME_LEN];
+	size_t idata_offset, idata_sz;
+	const struct rte_memzone *mz;
+	struct roc_npa *dev;
+	int rc, maxpools;
+
+	rc = plt_init();
+	if (rc < 0)
+		goto error;
+
+	maxpools = parse_aura_size(pci_dev->device.devargs);
+	/* Add the space for per-pool internal data pointers to memzone len */
+	idata_offset = RTE_ALIGN_CEIL(sizeof(*dev), ROC_ALIGN);
+	idata_sz = maxpools * sizeof(uintptr_t);
+
+	rc = -ENOMEM;
+	mz = rte_memzone_reserve_aligned(npa_dev_to_name(pci_dev, name),
+					 idata_offset + idata_sz, SOCKET_ID_ANY,
+					 0, RTE_CACHE_LINE_SIZE);
+	if (mz == NULL)
+		goto error;
+
+	dev = mz->addr;
+	dev->pci_dev = pci_dev;
+	cnxk_mempool_internal_data = (uintptr_t *)(mz->addr_64 + idata_offset);
+	memset(cnxk_mempool_internal_data, 0, idata_sz);
+
+	roc_idev_npa_maxpools_set(maxpools);
+	rc = roc_npa_dev_init(dev);
+	if (rc)
+		goto mz_free;
+
+	return 0;
+
+mz_free:
+	rte_memzone_free(mz);
+error:
+	plt_err("failed to initialize npa device rc=%d", rc);
+	return rc;
+}
+
+static int
+npa_fini(struct rte_pci_device *pci_dev)
+{
+	char name[CNXK_NPA_DEV_NAME_LEN];
+	const struct rte_memzone *mz;
+	int rc;
+
+	mz = rte_memzone_lookup(npa_dev_to_name(pci_dev, name));
+	if (mz == NULL)
+		return -EINVAL;
+
+	rc = roc_npa_dev_fini(mz->addr);
+	if (rc) {
+		if (rc != -EAGAIN)
+			plt_err("Failed to remove npa dev, rc=%d", rc);
+		return rc;
+	}
+	rte_memzone_free(mz);
+
+	return 0;
+}
+
+static int
+npa_remove(struct rte_pci_device *pci_dev)
+{
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return npa_fini(pci_dev);
+}
+
+static int
+npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+	RTE_SET_USED(pci_drv);
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return npa_init(pci_dev);
+}
+
+static const struct rte_pci_id npa_pci_map[] = {
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
+	},
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
+	},
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
+	},
+	{
+		.class_id = RTE_CLASS_ANY_ID,
+		.vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
+		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
+		.subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver npa_pci = {
+	.id_table = npa_pci_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
+	.probe = npa_probe,
+	.remove = npa_remove,
+};
+
+RTE_PMD_REGISTER_PCI(mempool_cnxk, npa_pci);
+RTE_PMD_REGISTER_PCI_TABLE(mempool_cnxk, npa_pci_map);
+RTE_PMD_REGISTER_KMOD_DEP(mempool_cnxk, "vfio-pci");
+RTE_PMD_REGISTER_PARAM_STRING(mempool_cnxk,
+			      CNXK_NPA_MAX_POOLS_PARAM "=<128-1048576>");
diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
new file mode 100644
index 0000000000..4ee3d236f2
--- /dev/null
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#ifndef _CNXK_MEMPOOL_H_
+#define _CNXK_MEMPOOL_H_
+
+#include
+
+extern uintptr_t *cnxk_mempool_internal_data;
+
+#endif
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
new file mode 100644
index 0000000000..23a171c143
--- /dev/null
+++ b/drivers/mempool/cnxk/meson.build
@@ -0,0 +1,29 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2021 Marvell.
+#
+
+if is_windows
+	build = false
+	reason = 'not supported on Windows'
+	subdir_done()
+endif
+if not dpdk_conf.get('RTE_ARCH_64')
+	build = false
+	reason = 'only supported on 64-bit'
+	subdir_done()
+endif
+
+sources = files('cnxk_mempool.c')
+
+deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']
+
+cflags_options = [
+	'-Wno-strict-prototypes',
+	'-Werror'
+]
+
+foreach option:cflags_options
+	if cc.has_argument(option)
+		cflags += option
+	endif
+endforeach
diff --git a/drivers/mempool/cnxk/version.map b/drivers/mempool/cnxk/version.map
new file mode 100644
index 0000000000..ee80c51721
--- /dev/null
+++ b/drivers/mempool/cnxk/version.map
@@ -0,0 +1,3 @@
+INTERNAL {
+	local: *;
+};
diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
index 4428813dae..a2814c1dfa 100644
--- a/drivers/mempool/meson.build
+++ b/drivers/mempool/meson.build
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-drivers = ['bucket', 'dpaa', 'dpaa2', 'octeontx', 'octeontx2', 'ring', 'stack']
+drivers = ['bucket', 'cnxk', 'dpaa', 'dpaa2', 'octeontx', 'octeontx2', 'ring',
+	   'stack']
 std_deps = ['mempool']

From patchwork Fri Mar 5 16:21:45 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88585
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
Date: Fri, 5 Mar 2021 21:51:45 +0530
Message-ID: <20210305162149.2196166-3-asekhar@marvell.com>
In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH 2/6] mempool/cnxk: add generic ops

Add generic cnxk mempool ops.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cnxk_mempool.h     |  16 +++
 drivers/mempool/cnxk/cnxk_mempool_ops.c | 173 ++++++++++++++++++++++++
 drivers/mempool/cnxk/meson.build        |   3 +-
 3 files changed, 191 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/cnxk_mempool_ops.c

diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 4ee3d236f2..8f226f861c 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -7,6 +7,22 @@
 
 #include
 
+unsigned int cnxk_mempool_get_count(const struct rte_mempool *mp);
+ssize_t cnxk_mempool_calc_mem_size(const struct rte_mempool *mp,
+				   uint32_t obj_num, uint32_t pg_shift,
+				   size_t *min_chunk_size, size_t *align);
+int cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
+			  void *vaddr, rte_iova_t iova, size_t len,
+			  rte_mempool_populate_obj_cb_t *obj_cb,
+			  void *obj_cb_arg);
+int cnxk_mempool_alloc(struct rte_mempool *mp);
+void cnxk_mempool_free(struct rte_mempool *mp);
+
+int __rte_hot cnxk_mempool_enq(struct rte_mempool *mp, void *const *obj_table,
+			       unsigned int n);
+int __rte_hot cnxk_mempool_deq(struct rte_mempool *mp, void **obj_table,
+			       unsigned int n);
+
 extern uintptr_t *cnxk_mempool_internal_data;
 
 #endif
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
new file mode 100644
index 0000000000..29a4c12208
--- /dev/null
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include
+
+#include "roc_api.h"
+#include "cnxk_mempool.h"
+
+int __rte_hot
+cnxk_mempool_enq(struct rte_mempool *mp, void *const *obj_table, unsigned int n)
+{
+	unsigned int index;
+
+	/* Ensure mbuf init changes are written before the free pointers
+	 * are enqueued to the stack.
+	 */
+	rte_io_wmb();
+	for (index = 0; index < n; index++)
+		roc_npa_aura_op_free(mp->pool_id, 0,
+				     (uint64_t)obj_table[index]);
+
+	return 0;
+}
+
+int __rte_hot
+cnxk_mempool_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	unsigned int index;
+	uint64_t obj;
+
+	for (index = 0; index < n; index++, obj_table++) {
+		int retry = 4;
+
+		/* Retry few times before failing */
+		do {
+			obj = roc_npa_aura_op_alloc(mp->pool_id, 0);
+		} while (retry-- && (obj == 0));
+
+		if (obj == 0) {
+			cnxk_mempool_enq(mp, obj_table - index, index);
+			return -ENOENT;
+		}
+		*obj_table = (void *)obj;
+	}
+
+	return 0;
+}
+
+unsigned int
+cnxk_mempool_get_count(const struct rte_mempool *mp)
+{
+	return (unsigned int)roc_npa_aura_op_available(mp->pool_id);
+}
+
+ssize_t
+cnxk_mempool_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
+			   uint32_t pg_shift, size_t *min_chunk_size,
+			   size_t *align)
+{
+	size_t total_elt_sz;
+
+	/* Need space for one more obj on each chunk to fulfill
+	 * alignment requirements.
+	 */
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+	return rte_mempool_op_calc_mem_size_helper(
+		mp, obj_num, pg_shift, total_elt_sz, min_chunk_size, align);
+}
+
+int
+cnxk_mempool_alloc(struct rte_mempool *mp)
+{
+	uint64_t aura_handle = 0;
+	struct npa_aura_s aura;
+	struct npa_pool_s pool;
+	uint32_t block_count;
+	size_t block_size;
+	int rc = -ERANGE;
+
+	block_size = mp->elt_size + mp->header_size + mp->trailer_size;
+	block_count = mp->size;
+	if (mp->header_size % ROC_ALIGN != 0) {
+		plt_err("Header size should be multiple of %dB", ROC_ALIGN);
+		goto error;
+	}
+
+	if (block_size % ROC_ALIGN != 0) {
+		plt_err("Block size should be multiple of %dB", ROC_ALIGN);
+		goto error;
+	}
+
+	memset(&aura, 0, sizeof(struct npa_aura_s));
+	memset(&pool, 0, sizeof(struct npa_pool_s));
+	pool.nat_align = 1;
+	/* TODO: Check whether to allow buf_offset > 1 ?? */
+	pool.buf_offset = mp->header_size / ROC_ALIGN;
+
+	/* Use driver specific mp->pool_config to override aura config */
+	if (mp->pool_config != NULL)
+		memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
+
+	rc = roc_npa_pool_create(&aura_handle, block_size, block_count, &aura,
+				 &pool);
+	if (rc) {
+		plt_err("Failed to alloc pool or aura rc=%d", rc);
+		goto error;
+	}
+
+	/* Store aura_handle for future queue operations */
+	mp->pool_id = aura_handle;
+	plt_npa_dbg("block_sz=%lu block_count=%d aura_handle=0x%" PRIx64,
+		    block_size, block_count, aura_handle);
+
+	return 0;
+error:
+	return rc;
+}
+
+void
+cnxk_mempool_free(struct rte_mempool *mp)
+{
+	int rc = 0;
+
+	plt_npa_dbg("aura_handle=0x%" PRIx64, mp->pool_id);
+	rc = roc_npa_pool_destroy(mp->pool_id);
+	if (rc)
+		plt_err("Failed to free pool or aura rc=%d", rc);
+}
+
+int
+cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
+		      void *vaddr, rte_iova_t iova, size_t len,
+		      rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
+{
+	size_t total_elt_sz, off;
+	int num_elts;
+
+	if (iova == RTE_BAD_IOVA)
+		return -EINVAL;
+
+	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
+
+	/* Align object start address to a multiple of total_elt_sz */
+	off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
+
+	if (len < off)
+		return -EINVAL;
+
+	vaddr = (char *)vaddr + off;
+	iova += off;
+	len -= off;
+	num_elts = len / total_elt_sz;
+
+	plt_npa_dbg("iova %" PRIx64 ", aligned iova %" PRIx64 "", iova - off,
+		    iova);
+	plt_npa_dbg("length %" PRIu64 ", aligned length %" PRIu64 "",
+		    (uint64_t)(len + off), (uint64_t)len);
+	plt_npa_dbg("element size %" PRIu64 "", (uint64_t)total_elt_sz);
+	plt_npa_dbg("requested objects %" PRIu64 ", possible objects %" PRIu64
+		    "", (uint64_t)max_objs, (uint64_t)num_elts);
+	plt_npa_dbg("L1D set distribution :");
+
+	roc_npa_aura_op_range_set(mp->pool_id, iova,
+				  iova + num_elts * total_elt_sz);
+
+	if (roc_npa_pool_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
+	return rte_mempool_op_populate_helper(
+		mp, RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ, max_objs, vaddr, iova,
+		len, obj_cb, obj_cb_arg);
+}
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index 23a171c143..b9a810e021 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -13,7 +13,8 @@ if not dpdk_conf.get('RTE_ARCH_64')
 	subdir_done()
 endif
 
-sources = files('cnxk_mempool.c')
+sources = files('cnxk_mempool.c',
+		'cnxk_mempool_ops.c')
 
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']

From patchwork Fri Mar 5 16:21:46 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88586
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
Date: Fri, 5 Mar 2021 21:51:46 +0530
Message-ID: <20210305162149.2196166-4-asekhar@marvell.com>
In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH 3/6] mempool/cnxk: add cn9k mempool ops

Add mempool ops specific to cn9k.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cn9k_mempool_ops.c | 90 +++++++++++++++++++++++++
 drivers/mempool/cnxk/meson.build        |  3 +-
 2 files changed, 92 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/cn9k_mempool_ops.c

diff --git a/drivers/mempool/cnxk/cn9k_mempool_ops.c b/drivers/mempool/cnxk/cn9k_mempool_ops.c
new file mode 100644
index 0000000000..3a7de39db2
--- /dev/null
+++ b/drivers/mempool/cnxk/cn9k_mempool_ops.c
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include
+
+#include "roc_api.h"
+#include "cnxk_mempool.h"
+
+static int __rte_hot
+cn9k_mempool_enq(struct rte_mempool *mp, void *const *obj_table, unsigned int n)
+{
+	/* Ensure mbuf init changes are written before the free pointers
+	 * are enqueued to the stack.
+	 */
+	rte_io_wmb();
+	roc_npa_aura_op_bulk_free(mp->pool_id, (const uint64_t *)obj_table, n,
+				  0);
+
+	return 0;
+}
+
+static inline int __rte_hot
+cn9k_mempool_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	unsigned int count;
+
+	count = roc_npa_aura_op_bulk_alloc(mp->pool_id, (uint64_t *)obj_table,
+					   n, 0, 1);
+
+	if (unlikely(count != n)) {
+		/* If bulk alloc failed to allocate all pointers, try
+		 * allocating remaining pointers with the default alloc
+		 * with retry scheme.
+		 */
+		if (cnxk_mempool_deq(mp, &obj_table[count], n - count)) {
+			cn9k_mempool_enq(mp, obj_table, count);
+			return -ENOENT;
+		}
+	}
+
+	return 0;
+}
+
+static int
+cn9k_mempool_alloc(struct rte_mempool *mp)
+{
+	size_t block_size, padding;
+
+	block_size = mp->elt_size + mp->header_size + mp->trailer_size;
+	/* Align header size to ROC_ALIGN */
+	if (mp->header_size % ROC_ALIGN != 0) {
+		padding = RTE_ALIGN_CEIL(mp->header_size, ROC_ALIGN) -
+			  mp->header_size;
+		mp->header_size += padding;
+		block_size += padding;
+	}
+
+	/* Align block size to ROC_ALIGN */
+	if (block_size % ROC_ALIGN != 0) {
+		padding = RTE_ALIGN_CEIL(block_size, ROC_ALIGN) - block_size;
+		mp->trailer_size += padding;
+		block_size += padding;
+	}
+
+	/*
+	 * OCTEON TX2 has 8 sets, 41 ways L1D cache, VA<9:7> bits dictate
+	 * the set selection.
+	 * Add additional padding to ensure that the element size always
+	 * occupies odd number of cachelines to ensure even distribution
+	 * of elements among L1D cache sets.
+	 */
+	padding = ((block_size / ROC_ALIGN) % 2) ? 0 : ROC_ALIGN;
+	mp->trailer_size += padding;
+
+	return cnxk_mempool_alloc(mp);
+}
+
+static struct rte_mempool_ops cn9k_mempool_ops = {
+	.name = "cn9k_mempool_ops",
+	.alloc = cn9k_mempool_alloc,
+	.free = cnxk_mempool_free,
+	.enqueue = cn9k_mempool_enq,
+	.dequeue = cn9k_mempool_deq,
+	.get_count = cnxk_mempool_get_count,
+	.calc_mem_size = cnxk_mempool_calc_mem_size,
+	.populate = cnxk_mempool_populate,
+};
+
+MEMPOOL_REGISTER_OPS(cn9k_mempool_ops);
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index b9a810e021..4ce865e18b 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -14,7 +14,8 @@ if not dpdk_conf.get('RTE_ARCH_64')
 endif
 
 sources = files('cnxk_mempool.c',
-		'cnxk_mempool_ops.c')
+		'cnxk_mempool_ops.c',
+		'cn9k_mempool_ops.c')
 
 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']

From patchwork Fri Mar 5 16:21:47 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88587
X-Patchwork-Delegate: jerinj@marvell.com
: in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=dxaTSpX+pYpMfM+QYGQzZTkfC/jr3OoAHDvsgHDRoPY=; b=Ss11lJHJp35oA8ku2Pz/fPX2NFiisjYeLNUmF0OvIaiA5v1HU6EXiJLL/qS4DhWqq5oP c5ekTK5v+57p1dByPZhB3Le7lPs0SXjftuI3J4OpK9f4CbLHliCSQ3Z+FYjcdgnjatVv 5uzU8+QuBl+5rHLR9qE8IuBF5wgCHSdnWA7i0l2v71ZmjS9Ykvlc4lnf1pOFFoW0yOKF CCA8TIs6ykt8U8I50689vmW2VtqyIx27KYgKEYz4dF0lOqyMyftKoAA0/tfODJnTqflH 5EletX68kL3ydGl2wDzlQvWHtOiET6ko9fB0mLWq3L68nWZNqFsj0oRpYkCfI6kDGkxU qg== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com with ESMTP id 372s2un6gn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Fri, 05 Mar 2021 08:22:09 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Fri, 5 Mar 2021 08:22:08 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Fri, 5 Mar 2021 08:22:07 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Fri, 5 Mar 2021 08:22:07 -0800 Received: from lab-ci-142.marvell.com (unknown [10.28.36.142]) by maili.marvell.com (Postfix) with ESMTP id 1C2923F7043; Fri, 5 Mar 2021 08:22:04 -0800 (PST) From: Ashwin Sekhar T K To: CC: , , , , , , Date: Fri, 5 Mar 2021 21:51:47 +0530 Message-ID: <20210305162149.2196166-5-asekhar@marvell.com> X-Mailer: git-send-email 2.29.2 In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com> References: <20210305162149.2196166-1-asekhar@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369, 18.0.761 definitions=2021-03-05_10:2021-03-03, 2021-03-05 signatures=0 X-Mailman-Approved-At: Fri, 05 Mar 2021 19:13:23 +0100 Subject: [dpdk-dev] [PATCH 4/6] mempool/cnxk: add base cn10k mempool ops 
Add base cn10k mempool ops.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cn10k_mempool_ops.c | 46 ++++++++++++++++++++++++
 drivers/mempool/cnxk/meson.build         |  3 +-
 2 files changed, 48 insertions(+), 1 deletion(-)
 create mode 100644 drivers/mempool/cnxk/cn10k_mempool_ops.c

diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
new file mode 100644
index 0000000000..fc7592fd94
--- /dev/null
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include
+
+#include "roc_api.h"
+#include "cnxk_mempool.h"
+
+static int
+cn10k_mempool_alloc(struct rte_mempool *mp)
+{
+        uint32_t block_size;
+        size_t padding;
+
+        block_size = mp->elt_size + mp->header_size + mp->trailer_size;
+        /* Align header size to ROC_ALIGN */
+        if (mp->header_size % ROC_ALIGN != 0) {
+                padding = RTE_ALIGN_CEIL(mp->header_size, ROC_ALIGN) -
+                          mp->header_size;
+                mp->header_size += padding;
+                block_size += padding;
+        }
+
+        /* Align block size to ROC_ALIGN */
+        if (block_size % ROC_ALIGN != 0) {
+                padding = RTE_ALIGN_CEIL(block_size, ROC_ALIGN) - block_size;
+                mp->trailer_size += padding;
+                block_size += padding;
+        }
+
+        return cnxk_mempool_alloc(mp);
+}
+
+static struct rte_mempool_ops cn10k_mempool_ops = {
+        .name = "cn10k_mempool_ops",
+        .alloc = cn10k_mempool_alloc,
+        .free = cnxk_mempool_free,
+        .enqueue = cnxk_mempool_enq,
+        .dequeue = cnxk_mempool_deq,
+        .get_count = cnxk_mempool_get_count,
+        .calc_mem_size = cnxk_mempool_calc_mem_size,
+        .populate = cnxk_mempool_populate,
+};
+
+MEMPOOL_REGISTER_OPS(cn10k_mempool_ops);
diff --git a/drivers/mempool/cnxk/meson.build b/drivers/mempool/cnxk/meson.build
index 4ce865e18b..46f502bf3a 100644
--- a/drivers/mempool/cnxk/meson.build
+++ b/drivers/mempool/cnxk/meson.build
@@ -15,7 +15,8 @@ endif
 sources = files('cnxk_mempool.c',
                 'cnxk_mempool_ops.c',
-                'cn9k_mempool_ops.c')
+                'cn9k_mempool_ops.c',
+                'cn10k_mempool_ops.c')

 deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_cnxk', 'mempool']

From patchwork Fri Mar 5 16:21:48 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88588
From: Ashwin Sekhar T K
Date: Fri, 5 Mar 2021 21:51:48 +0530
Message-ID: <20210305162149.2196166-6-asekhar@marvell.com>
In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH 5/6] mempool/cnxk: add cn10k batch enqueue/dequeue support

Add support for asynchronous batch enqueue/dequeue of pointers from NPA pool.
Signed-off-by: Ashwin Sekhar T K
---
 drivers/mempool/cnxk/cn10k_mempool_ops.c | 258 ++++++++++++++++++++++-
 drivers/mempool/cnxk/cnxk_mempool.c      |  19 +-
 drivers/mempool/cnxk/cnxk_mempool.h      |   3 +-
 drivers/mempool/cnxk/cnxk_mempool_ops.c  |  28 +++
 4 files changed, 287 insertions(+), 21 deletions(-)

diff --git a/drivers/mempool/cnxk/cn10k_mempool_ops.c b/drivers/mempool/cnxk/cn10k_mempool_ops.c
index fc7592fd94..131abc0723 100644
--- a/drivers/mempool/cnxk/cn10k_mempool_ops.c
+++ b/drivers/mempool/cnxk/cn10k_mempool_ops.c
@@ -7,11 +7,239 @@
 #include "roc_api.h"
 #include "cnxk_mempool.h"

+#define BATCH_ALLOC_SZ ROC_CN10K_NPA_BATCH_ALLOC_MAX_PTRS
+
+enum batch_op_status {
+        BATCH_ALLOC_OP_NOT_ISSUED = 0,
+        BATCH_ALLOC_OP_ISSUED = 1,
+        BATCH_ALLOC_OP_DONE
+};
+
+struct batch_op_mem {
+        unsigned int sz;
+        enum batch_op_status status;
+        uint64_t objs[BATCH_ALLOC_SZ] __rte_aligned(ROC_ALIGN);
+};
+
+struct batch_op_data {
+        uint64_t lmt_addr;
+        struct batch_op_mem mem[RTE_MAX_LCORE] __rte_aligned(ROC_ALIGN);
+};
+
+static struct batch_op_data **batch_op_data;
+
+#define BATCH_OP_DATA_GET(pool_id) \
+        batch_op_data[roc_npa_aura_handle_to_aura(pool_id)]
+
+#define BATCH_OP_DATA_SET(pool_id, op_data) \
+        do { \
+                uint64_t aura = roc_npa_aura_handle_to_aura(pool_id); \
+                batch_op_data[aura] = op_data; \
+        } while (0)
+
+int
+cn10k_mempool_lf_init(void)
+{
+        unsigned int maxpools, sz;
+
+        maxpools = roc_idev_npa_maxpools_get();
+        sz = maxpools * sizeof(uintptr_t);
+
+        batch_op_data = rte_zmalloc(NULL, sz, ROC_ALIGN);
+        if (!batch_op_data)
+                return -1;
+
+        return 0;
+}
+
+void
+cn10k_mempool_lf_fini(void)
+{
+        if (!batch_op_data)
+                return;
+
+        rte_free(batch_op_data);
+        batch_op_data = NULL;
+}
+
+static int
+batch_op_init(struct rte_mempool *mp)
+{
+        struct batch_op_data *op_data;
+        int i;
+
+        RTE_ASSERT(BATCH_OP_DATA_GET(mp->pool_id) == NULL);
+        op_data = rte_zmalloc(NULL, sizeof(struct batch_op_data), ROC_ALIGN);
+        if (op_data == NULL)
+                return -1;
+
+        for (i = 0; i < RTE_MAX_LCORE; i++) {
+                op_data->mem[i].sz = 0;
+                op_data->mem[i].status = BATCH_ALLOC_OP_NOT_ISSUED;
+        }
+
+        op_data->lmt_addr = roc_idev_lmt_base_addr_get();
+        BATCH_OP_DATA_SET(mp->pool_id, op_data);
+
+        return 0;
+}
+
+static void
+batch_op_fini(struct rte_mempool *mp)
+{
+        struct batch_op_data *op_data;
+        int i;
+
+        op_data = BATCH_OP_DATA_GET(mp->pool_id);
+
+        rte_wmb();
+        for (i = 0; i < RTE_MAX_LCORE; i++) {
+                struct batch_op_mem *mem = &op_data->mem[i];
+
+                if (mem->status == BATCH_ALLOC_OP_ISSUED) {
+                        mem->sz = roc_npa_aura_batch_alloc_extract(
+                                mem->objs, mem->objs, BATCH_ALLOC_SZ);
+                        mem->status = BATCH_ALLOC_OP_DONE;
+                }
+                if (mem->status == BATCH_ALLOC_OP_DONE) {
+                        roc_npa_aura_op_bulk_free(mp->pool_id, mem->objs,
+                                                  mem->sz, 1);
+                        mem->status = BATCH_ALLOC_OP_NOT_ISSUED;
+                }
+        }
+
+        rte_free(op_data);
+        BATCH_OP_DATA_SET(mp->pool_id, NULL);
+}
+
+static int __rte_hot
+cn10k_mempool_enq(struct rte_mempool *mp, void *const *obj_table,
+                  unsigned int n)
+{
+        const uint64_t *ptr = (const uint64_t *)obj_table;
+        uint64_t lmt_addr = 0, lmt_id = 0;
+        struct batch_op_data *op_data;
+
+        /* Ensure mbuf init changes are written before the free pointers
+         * are enqueued to the stack.
+         */
+        rte_io_wmb();
+
+        if (n == 1) {
+                roc_npa_aura_op_free(mp->pool_id, 1, ptr[0]);
+                return 0;
+        }
+
+        op_data = BATCH_OP_DATA_GET(mp->pool_id);
+        lmt_addr = op_data->lmt_addr;
+        ROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);
+        roc_npa_aura_op_batch_free(mp->pool_id, ptr, n, 1, lmt_addr, lmt_id);
+
+        return 0;
+}
+
+static int __rte_hot
+cn10k_mempool_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+        struct batch_op_data *op_data;
+        struct batch_op_mem *mem;
+        unsigned int count = 0;
+        int tid, rc, retry;
+        bool loop = true;
+
+        op_data = BATCH_OP_DATA_GET(mp->pool_id);
+        tid = rte_lcore_id();
+        mem = &op_data->mem[tid];
+
+        /* Issue batch alloc */
+        if (mem->status == BATCH_ALLOC_OP_NOT_ISSUED) {
+                rc = roc_npa_aura_batch_alloc_issue(mp->pool_id, mem->objs,
+                                                    BATCH_ALLOC_SZ, 0, 1);
+                /* If issue fails, try falling back to default alloc */
+                if (unlikely(rc))
+                        return cnxk_mempool_deq(mp, obj_table, n);
+                mem->status = BATCH_ALLOC_OP_ISSUED;
+        }
+
+        retry = 4;
+        while (loop) {
+                unsigned int cur_sz;
+
+                if (mem->status == BATCH_ALLOC_OP_ISSUED) {
+                        mem->sz = roc_npa_aura_batch_alloc_extract(
+                                mem->objs, mem->objs, BATCH_ALLOC_SZ);
+
+                        /* If partial alloc reduce the retry count */
+                        retry -= (mem->sz != BATCH_ALLOC_SZ);
+                        /* Break the loop if retry count exhausted */
+                        loop = !!retry;
+                        mem->status = BATCH_ALLOC_OP_DONE;
+                }
+
+                cur_sz = n - count;
+                if (cur_sz > mem->sz)
+                        cur_sz = mem->sz;
+
+                /* Dequeue the pointers */
+                memcpy(&obj_table[count], &mem->objs[mem->sz - cur_sz],
+                       cur_sz * sizeof(uintptr_t));
+                mem->sz -= cur_sz;
+                count += cur_sz;
+
+                /* Break loop if the required pointers have been dequeued */
+                loop &= (count != n);
+
+                /* Issue next batch alloc if pointers are exhausted */
+                if (mem->sz == 0) {
+                        rc = roc_npa_aura_batch_alloc_issue(
+                                mp->pool_id, mem->objs, BATCH_ALLOC_SZ, 0, 1);
+                        /* Break loop if issue failed and set status */
+                        loop &= !rc;
+                        mem->status = !rc;
+                }
+        }
+
+        if (unlikely(count != n)) {
+                /* No partial alloc allowed. Free up allocated pointers */
+                cn10k_mempool_enq(mp, obj_table, count);
+                return -ENOENT;
+        }
+
+        return 0;
+}
+
+static unsigned int
+cn10k_mempool_get_count(const struct rte_mempool *mp)
+{
+        struct batch_op_data *op_data;
+        unsigned int count = 0;
+        int i;
+
+        op_data = BATCH_OP_DATA_GET(mp->pool_id);
+
+        rte_wmb();
+        for (i = 0; i < RTE_MAX_LCORE; i++) {
+                struct batch_op_mem *mem = &op_data->mem[i];
+
+                if (mem->status == BATCH_ALLOC_OP_ISSUED)
+                        count += roc_npa_aura_batch_alloc_count(mem->objs,
+                                                                BATCH_ALLOC_SZ);
+
+                if (mem->status == BATCH_ALLOC_OP_DONE)
+                        count += mem->sz;
+        }
+
+        count += cnxk_mempool_get_count(mp);
+
+        return count;
+}
+
 static int
 cn10k_mempool_alloc(struct rte_mempool *mp)
 {
         uint32_t block_size;
         size_t padding;
+        int rc;

         block_size = mp->elt_size + mp->header_size + mp->trailer_size;
         /* Align header size to ROC_ALIGN */
@@ -29,16 +257,36 @@ cn10k_mempool_alloc(struct rte_mempool *mp)
                 block_size += padding;
         }

-        return cnxk_mempool_alloc(mp);
+        rc = cnxk_mempool_alloc(mp);
+        if (rc)
+                return rc;
+
+        rc = batch_op_init(mp);
+        if (rc) {
+                plt_err("Failed to init batch alloc mem rc=%d", rc);
+                goto error;
+        }
+
+        return 0;
+error:
+        cnxk_mempool_free(mp);
+        return rc;
+}
+
+static void
+cn10k_mempool_free(struct rte_mempool *mp)
+{
+        batch_op_fini(mp);
+        cnxk_mempool_free(mp);
+}

 static struct rte_mempool_ops cn10k_mempool_ops = {
         .name = "cn10k_mempool_ops",
         .alloc = cn10k_mempool_alloc,
-        .free = cnxk_mempool_free,
-        .enqueue = cnxk_mempool_enq,
-        .dequeue = cnxk_mempool_deq,
-        .get_count = cnxk_mempool_get_count,
+        .free = cn10k_mempool_free,
+        .enqueue = cn10k_mempool_enq,
+        .dequeue = cn10k_mempool_deq,
+        .get_count = cn10k_mempool_get_count,
         .calc_mem_size = cnxk_mempool_calc_mem_size,
         .populate = cnxk_mempool_populate,
 };
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index c24497a6e5..1bbe384fe7 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -14,14 +14,11 @@
 #include

 #include "roc_api.h"
-#include "cnxk_mempool.h"

 #define CNXK_NPA_DEV_NAME        RTE_STR(cnxk_npa_dev_)
 #define CNXK_NPA_DEV_NAME_LEN    (sizeof(CNXK_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
 #define CNXK_NPA_MAX_POOLS_PARAM "max_pools"

-uintptr_t *cnxk_mempool_internal_data;
-
 static inline uint32_t
 npa_aura_size_to_u32(uint8_t val)
 {
@@ -82,33 +79,25 @@ static int
 npa_init(struct rte_pci_device *pci_dev)
 {
         char name[CNXK_NPA_DEV_NAME_LEN];
-        size_t idata_offset, idata_sz;
         const struct rte_memzone *mz;
         struct roc_npa *dev;
-        int rc, maxpools;
+        int rc;

         rc = plt_init();
         if (rc < 0)
                 goto error;

-        maxpools = parse_aura_size(pci_dev->device.devargs);
-        /* Add the space for per-pool internal data pointers to memzone len */
-        idata_offset = RTE_ALIGN_CEIL(sizeof(*dev), ROC_ALIGN);
-        idata_sz = maxpools * sizeof(uintptr_t);
-
         rc = -ENOMEM;
         mz = rte_memzone_reserve_aligned(npa_dev_to_name(pci_dev, name),
-                                         idata_offset + idata_sz, SOCKET_ID_ANY,
-                                         0, RTE_CACHE_LINE_SIZE);
+                                         sizeof(*dev), SOCKET_ID_ANY, 0,
+                                         RTE_CACHE_LINE_SIZE);
         if (mz == NULL)
                 goto error;

         dev = mz->addr;
         dev->pci_dev = pci_dev;

-        cnxk_mempool_internal_data = (uintptr_t *)(mz->addr_64 + idata_offset);
-        memset(cnxk_mempool_internal_data, 0, idata_sz);
-
-        roc_idev_npa_maxpools_set(maxpools);
+        roc_idev_npa_maxpools_set(parse_aura_size(pci_dev->device.devargs));

         rc = roc_npa_dev_init(dev);
         if (rc)
                 goto mz_free;
diff --git a/drivers/mempool/cnxk/cnxk_mempool.h b/drivers/mempool/cnxk/cnxk_mempool.h
index 8f226f861c..6e54346e6a 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.h
+++ b/drivers/mempool/cnxk/cnxk_mempool.h
@@ -23,6 +23,7 @@ int __rte_hot cnxk_mempool_enq(struct rte_mempool *mp, void *const *obj_table,
 int __rte_hot cnxk_mempool_deq(struct rte_mempool *mp, void **obj_table,
                                unsigned int n);

-extern uintptr_t *cnxk_mempool_internal_data;
+int cn10k_mempool_lf_init(void);
+void cn10k_mempool_lf_fini(void);

 #endif
diff --git a/drivers/mempool/cnxk/cnxk_mempool_ops.c b/drivers/mempool/cnxk/cnxk_mempool_ops.c
index 29a4c12208..18f125c7ac 100644
--- a/drivers/mempool/cnxk/cnxk_mempool_ops.c
+++ b/drivers/mempool/cnxk/cnxk_mempool_ops.c
@@ -2,6 +2,7 @@
  * Copyright(C) 2021 Marvell.
  */

+#include
 #include

 #include "roc_api.h"
@@ -171,3 +172,30 @@ cnxk_mempool_populate(struct rte_mempool *mp, unsigned int max_objs,
                 mp, RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ, max_objs, vaddr, iova,
                 len, obj_cb, obj_cb_arg);
 }
+
+static int
+cnxk_mempool_lf_init(void)
+{
+        int rc = 0;
+
+        if (roc_model_is_cn10k()) {
+                rte_mbuf_set_platform_mempool_ops("cn10k_mempool_ops");
+                rc = cn10k_mempool_lf_init();
+        } else {
+                rte_mbuf_set_platform_mempool_ops("cn9k_mempool_ops");
+        }
+        return rc;
+}
+
+static void
+cnxk_mempool_lf_fini(void)
+{
+        if (roc_model_is_cn10k())
+                cn10k_mempool_lf_fini();
+}
+
+RTE_INIT(cnxk_mempool_ops_init)
+{
+        roc_npa_lf_init_cb_register(cnxk_mempool_lf_init);
+        roc_npa_lf_fini_cb_register(cnxk_mempool_lf_fini);
+}

From patchwork Fri Mar 5 16:21:49 2021
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 88589
From: Ashwin Sekhar T K
CC: Nithin Dabilpuram
Date: Fri, 5 Mar 2021 21:51:49 +0530
Message-ID: <20210305162149.2196166-7-asekhar@marvell.com>
In-Reply-To: <20210305162149.2196166-1-asekhar@marvell.com>
Subject: [dpdk-dev] [PATCH 6/6] doc: add Marvell CNXK mempool documentation

Add Marvell OCTEON CNXK mempool documentation.

Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Ashwin Sekhar T K
---
 MAINTAINERS                  |  6 +++
 doc/guides/mempool/cnxk.rst  | 84 ++++++++++++++++++++++++++++++++++++
 doc/guides/mempool/index.rst |  1 +
 doc/guides/platform/cnxk.rst |  3 ++
 4 files changed, 94 insertions(+)
 create mode 100644 doc/guides/mempool/cnxk.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 45dcd36dbe..67c179f11b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -501,6 +501,12 @@ M: Artem V. Andreev
 M: Andrew Rybchenko
 F: drivers/mempool/bucket/

+Marvell cnxk
+M: Ashwin Sekhar T K
+M: Pavan Nikhilesh
+F: drivers/mempool/cnxk/
+F: doc/guides/mempool/cnxk.rst
+
 Marvell OCTEON TX2
 M: Jerin Jacob
 M: Nithin Dabilpuram
diff --git a/doc/guides/mempool/cnxk.rst b/doc/guides/mempool/cnxk.rst
new file mode 100644
index 0000000000..fe099bb11a
--- /dev/null
+++ b/doc/guides/mempool/cnxk.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2021 Marvell.
+
+CNXK NPA Mempool Driver
+=======================
+
+The CNXK NPA PMD (**librte_mempool_cnxk**) provides mempool driver support for
+the integrated mempool device found in the **Marvell OCTEON CN9K/CN10K** SoC family.
+
+More information about the CNXK SoC can be found at the `Marvell Official Website
+`_.
+
+Features
+--------
+
+The CNXK NPA PMD supports:
+
+- Up to 128 NPA LFs
+- 1M pools per LF
+- HW mempool manager
+- Asynchronous batch allocation of up to 512 buffers with a single instruction.
+- Batch free of up to 15 buffers with a single instruction.
+- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path.
+- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path.
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+   See :doc:`../platform/cnxk` for setup information.
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Maximum number of mempools per application`` (default ``128``)
+
+   The maximum number of mempools per application needs to be configured on
+   HW during mempool driver initialization. The HW can support up to 1M
+   mempools. Since each mempool costs a set of HW resources, the ``max_pools``
+   ``devargs`` parameter is introduced to configure the number of mempools
+   required by the application.
+   For example::
+
+      -a 0002:02:00.0,max_pools=512
+
+   With the above configuration, the driver will set up only 512 mempools for
+   the given application, saving HW resources.
+
+.. note::
+
+   Since this configuration is per application, the end user needs to provide
+   the ``max_pools`` parameter to the first PCIe device probed by the given
+   application.
+
+Debugging Options
+~~~~~~~~~~~~~~~~~
+
+.. _table_cnxk_mempool_debug_options:
+
+.. table:: CNXK mempool debug options
+
+   +---+------------+-------------------------------------+
+   | # | Component  | EAL log command                     |
+   +===+============+=====================================+
+   | 1 | NPA        | --log-level='pmd\.mempool.cnxk,8'   |
+   +---+------------+-------------------------------------+
+
+Standalone mempool device
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+   The ``usertools/dpdk-devbind.py`` script enumerates all the mempool devices
+   available in the system. To avoid requiring the end user to bind the
+   mempool device before using an ethdev and/or eventdev device, the
+   respective driver configures an NPA LF and attaches it to the first probed
+   ethdev or eventdev device.
+   If the end user needs to run the mempool as a standalone device (without
+   ethdev or eventdev), the mempool device must be bound using
+   ``usertools/dpdk-devbind.py``.
+
+   Example command to run the ``mempool_autotest`` test with a standalone
+   CN10K NPA device::
+
+      echo "mempool_autotest" | /app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="cn10k_mempool_ops"
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
index a0e55467e6..ce53bc1ac7 100644
--- a/doc/guides/mempool/index.rst
+++ b/doc/guides/mempool/index.rst
@@ -11,6 +11,7 @@ application through the mempool API.
    :maxdepth: 2
    :numbered:

+   cnxk
    octeontx
    octeontx2
    ring
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 3b072877a1..9bbba65f2e 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -141,6 +141,9 @@ HW Offload Drivers

 This section lists dataplane H/W block(s) available in CNXK SoC.

+#. **Mempool Driver**
+   See :doc:`../mempool/cnxk` for NPA mempool driver information.
+
 Procedure to Setup Platform
 ---------------------------