From patchwork Mon Oct 21 08:03:21 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 61559
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Date: Mon, 21 Oct 2019 13:33:21 +0530
Message-ID: <20191021080324.10659-2-vattunuru@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20191021080324.10659-1-vattunuru@marvell.com>
References: <20190816061252.17214-1-vattunuru@marvell.com>
 <20191021080324.10659-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v11 1/4] mempool: populate mempool with the page sized chunks
List-Id: DPDK patches and discussions

From: Vamsi Attunuru <vattunuru@marvell.com>

This patch adds a routine to populate a mempool from page-aligned,
page-sized chunks of memory, ensuring that mempool objects do not fall
across page boundaries. This is useful for applications that require
physically contiguous mbuf memory while running in IOVA=VA mode.
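
A minimal usage sketch (illustration only, not part of the patch): an
application would create an empty pool, select an ops backend, attach
page-sized chunks through the new routine, then initialize the objects.
create_page_aligned_pool(), NB_MBUFS and DATA_ROOM are hypothetical
names, the "ring_mp_mc" ops choice mirrors the mempool default, and the
build must define ALLOW_EXPERIMENTAL_API:

#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUFS  8192
#define DATA_ROOM RTE_MBUF_DEFAULT_BUF_SIZE

static struct rte_mempool *
create_page_aligned_pool(const char *name, int socket_id)
{
	struct rte_mempool *mp;

	/* Empty pool sized for mbufs; no memory is attached yet. */
	mp = rte_mempool_create_empty(name, NB_MBUFS,
			sizeof(struct rte_mbuf) + DATA_ROOM, 0,
			sizeof(struct rte_pktmbuf_pool_private),
			socket_id, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0)
		goto err;

	/* NULL arg: the data room size is derived from elt_size. */
	rte_pktmbuf_pool_init(mp, NULL);

	/* Attach page-aligned, page-sized chunks. */
	if (rte_mempool_populate_from_pg_sz_chunks(mp) < 0)
		goto err;

	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
	return mp;
err:
	rte_mempool_free(mp);
	return NULL;
}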
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
Acked-by: Olivier Matz
---
 lib/librte_mempool/rte_mempool.c           | 69 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 20 +++++++++
 lib/librte_mempool/rte_mempool_version.map |  2 +
 3 files changed, 91 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 0f29e87..ef1298d 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,75 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate mempool from page sized mem chunks, allocate page size
+ * of memory in memzone and populate them. Return the number of objects added,
+ * or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0)
+			goto fail;
+
+		if (min_chunk_size > pg_sz) {
+			ret = -EINVAL;
+			goto fail;
+		}
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
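
For clarity on the loop above: each iteration reserves one memzone of at
most a page (min_chunk_size > pg_sz is rejected), populates as many
objects as fit into it, and repeats until all mp->size objects are
placed, so no object can straddle a page. An application could
double-check that invariant by walking the pool. The sketch below is
illustrative only; check_obj_in_page() is a hypothetical name, and the
page size must be supplied by the caller since get_min_page_size() is
EAL-internal:

#include <stdint.h>
#include <stdio.h>
#include <rte_mempool.h>

/* Hypothetical debug callback: report any object that straddles a
 * boundary of *opaque bytes (the page size the pool was populated with).
 */
static void
check_obj_in_page(struct rte_mempool *mp, void *opaque, void *obj,
		  unsigned int obj_idx)
{
	size_t pg_sz = *(const size_t *)opaque;
	uintptr_t start = (uintptr_t)obj;
	uintptr_t end = start + mp->elt_size - 1;

	if (start / pg_sz != end / pg_sz)
		printf("object %u spans a %zu-byte page boundary\n",
		       obj_idx, pg_sz);
}

/* Usage, e.g. assuming 2 MiB hugepages:
 *   size_t pg_sz = RTE_PGSIZE_2M;
 *   rte_mempool_obj_iter(mp, check_obj_in_page, &pg_sz);
 */

The rte_mempool.h diff below documents and exports the new routine: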
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..2f5126e 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,26 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Add memory from page sized memzones for objects in the pool at init
+ *
+ * This is the function used to populate the mempool with page aligned and
+ * page sized memzone memory to avoid spreading object memory across two pages
+ * and to ensure all mempool objects reside on the page memory.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..d6fe5a5 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,6 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	# added in 19.11
+	rte_mempool_populate_from_pg_sz_chunks;
 };
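
Since the prototype is tagged __rte_experimental and the symbol is
exported in the EXPERIMENTAL section of the version map, callers must
opt in to experimental APIs before using it, e.g. (illustrative):

/* Either in the source, before any DPDK include: */
#define ALLOW_EXPERIMENTAL_API
#include <rte_mempool.h>

/* ... or via the build system: CFLAGS += -DALLOW_EXPERIMENTAL_API */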