Patch Detail
GET /api/patches/69764/?format=api
Patch 69764: [dpdk-dev] [RFC] mem: add atomic lookup-and-reserve/free memzone API

From:         Anatoly Burakov <anatoly.burakov@intel.com>
To:           dev@dpdk.org
Date:         Tue, 5 May 2020 14:24:07 +0000
Message-Id:   <b0ef92d3be578c1dbcc6fd61a12dd943decaa15c.1588688636.git.anatoly.burakov@intel.com>
Project:      DPDK (dev.dpdk.org)
Series:       [RFC] mem: add atomic lookup-and-reserve/free memzone API (series 9836, v1)
State:        Rejected, archived
Delegated to: David Marchand <david.marchand@redhat.com>
Hash:         8ccb9663fc8d1a4e4d1ccf04d2d0f7f0d927fca3
Checks:       success
Web:          http://patches.dpdk.org/project/dpdk/patch/b0ef92d3be578c1dbcc6fd61a12dd943decaa15c.1588688636.git.anatoly.burakov@intel.com/
Archive:      https://inbox.dpdk.org/dev/b0ef92d3be578c1dbcc6fd61a12dd943decaa15c.1588688636.git.anatoly.burakov@intel.com

Currently, in order to perform a memzone lookup and create/free
the memzone, the user has to call two API's, creating a race
condition. This is particularly destructive for memzone_free call
because the reference provided to memzone_free at the time of call
may be stale or refer to a different memzone altogether.

Fix this race condition by adding an API to perform lookup and
create/free memzone in one go.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/common/eal_common_memzone.c | 125 ++++++++---
 lib/librte_eal/include/rte_memzone.h       | 235 +++++++++++++++++++++
 lib/librte_eal/rte_eal_version.map         |   4 +
 3 files changed, 340 insertions(+), 24 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memzone.c b/lib/librte_eal/common/eal_common_memzone.c
index 7c21aa921e..38dc995a39 100644
--- a/lib/librte_eal/common/eal_common_memzone.c
+++ b/lib/librte_eal/common/eal_common_memzone.c
@@ -189,7 +189,8 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
 
 static const struct rte_memzone *
 rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
-		unsigned int flags, unsigned int align, unsigned int bound)
+		unsigned int flags, unsigned int align, unsigned int bound,
+		bool lookup)
 {
 	struct rte_mem_config *mcfg;
 	const struct rte_memzone *mz = NULL;
@@ -199,11 +200,17 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
 
 	rte_rwlock_write_lock(&mcfg->mlock);
 
-	mz = memzone_reserve_aligned_thread_unsafe(
-		name, len, socket_id, flags, align, bound);
+	if (lookup) {
+		mz = memzone_lookup_thread_unsafe(name);
+		rte_eal_trace_memzone_lookup(name, mz);
+	}
+	if (mz == NULL) {
+		mz = memzone_reserve_aligned_thread_unsafe(
+			name, len, socket_id, flags, align, bound);
 
-	rte_eal_trace_memzone_reserve(name, len, socket_id, flags, align,
-		bound, mz);
+		rte_eal_trace_memzone_reserve(name, len, socket_id, flags,
+				align, bound, mz);
+	}
 
 	rte_rwlock_write_unlock(&mcfg->mlock);
 
@@ -220,7 +227,7 @@ rte_memzone_reserve_bounded(const char *name, size_t len, int socket_id,
 			 unsigned flags, unsigned align, unsigned bound)
 {
 	return rte_memzone_reserve_thread_safe(name, len, socket_id, flags,
-					 align, bound);
+			align, bound, false);
 }
 
 /*
@@ -232,7 +239,7 @@ rte_memzone_reserve_aligned(const char *name, size_t len, int socket_id,
 			 unsigned flags, unsigned align)
 {
 	return rte_memzone_reserve_thread_safe(name, len, socket_id, flags,
-					 align, 0);
+			align, 0, false);
 }
 
 /*
@@ -244,49 +251,119 @@ rte_memzone_reserve(const char *name, size_t len, int socket_id,
 		 unsigned flags)
 {
 	return rte_memzone_reserve_thread_safe(name, len, socket_id,
-					 flags, RTE_CACHE_LINE_SIZE, 0);
+			flags, RTE_CACHE_LINE_SIZE, 0, false);
 }
 
-int
-rte_memzone_free(const struct rte_memzone *mz)
+/*
+ * Return a pointer to a correctly filled memzone descriptor (with a
+ * specified alignment and boundary). If the allocation cannot be done,
+ * return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_lookup_reserve_bounded(const char *name, size_t len, int socket_id,
+		unsigned int flags, unsigned int align, unsigned int bound)
 {
-	char name[RTE_MEMZONE_NAMESIZE];
+	return rte_memzone_reserve_thread_safe(name, len, socket_id,
+			flags, align, bound, true);
+}
+
+/*
+ * Return a pointer to a correctly filled memzone descriptor (with a
+ * specified alignment). If the allocation cannot be done, return NULL.
+ */
+const struct rte_memzone *
+rte_memzone_lookup_reserve_aligned(const char *name, size_t len, int socket_id,
+		unsigned int flags, unsigned int align)
+{
+	return rte_memzone_reserve_thread_safe(name, len, socket_id,
+			flags, align, 0, true);
+}
+
+/*
+ * Return existing memzone, or create a new one if it doesn't exist.
+ */
+const struct rte_memzone *
+rte_memzone_lookup_reserve(const char *name, size_t len, int socket_id,
+		unsigned int flags)
+{
+	return rte_memzone_reserve_thread_safe(name, len, socket_id,
+			flags, RTE_CACHE_LINE_SIZE, 0, true);
+}
+
+static int
+rte_memzone_free_thread_unsafe(const struct rte_memzone *mz)
+{
+	struct rte_memzone *found_mz;
 	struct rte_mem_config *mcfg;
+	char name[RTE_MEMZONE_NAMESIZE];
 	struct rte_fbarray *arr;
-	struct rte_memzone *found_mz;
+	void *addr;
+	unsigned int idx;
 	int ret = 0;
-	void *addr = NULL;
-	unsigned idx;
-
-	if (mz == NULL)
-		return -EINVAL;
 
-	rte_strlcpy(name, mz->name, RTE_MEMZONE_NAMESIZE);
 	mcfg = rte_eal_get_configuration()->mem_config;
 	arr = &mcfg->memzones;
-
-	rte_rwlock_write_lock(&mcfg->mlock);
-
 	idx = rte_fbarray_find_idx(arr, mz);
 	found_mz = rte_fbarray_get(arr, idx);
+	rte_strlcpy(name, mz->name, RTE_MEMZONE_NAMESIZE);
 
 	if (found_mz == NULL) {
+		addr = NULL;
 		ret = -EINVAL;
 	} else if (found_mz->addr == NULL) {
 		RTE_LOG(ERR, EAL, "Memzone is not allocated\n");
+		addr = NULL;
 		ret = -EINVAL;
 	} else {
 		addr = found_mz->addr;
+
 		memset(found_mz, 0, sizeof(*found_mz));
 		rte_fbarray_set_free(arr, idx);
+
+		rte_free(addr);
 	}
+	rte_eal_trace_memzone_free(name, addr, ret);
+
+	return ret;
+}
+
+int
+rte_memzone_free(const struct rte_memzone *mz)
+{
+	struct rte_mem_config *mcfg;
+	int ret;
+
+	if (mz == NULL)
+		return -EINVAL;
 
+	mcfg = rte_eal_get_configuration()->mem_config;
+
+	rte_rwlock_write_lock(&mcfg->mlock);
+	ret = rte_memzone_free_thread_unsafe(mz);
 	rte_rwlock_write_unlock(&mcfg->mlock);
 
-	if (addr != NULL)
-		rte_free(addr);
+	return ret;
+}
 
-	rte_eal_trace_memzone_free(name, addr, ret);
+int
+rte_memzone_lookup_free(const char *name)
+{
+	const struct rte_memzone *memzone;
+	struct rte_mem_config *mcfg;
+	int ret;
+
+	mcfg = rte_eal_get_configuration()->mem_config;
+
+	rte_rwlock_read_lock(&mcfg->mlock);
+
+	memzone = memzone_lookup_thread_unsafe(name);
+	rte_eal_trace_memzone_lookup(name, memzone);
+	if (memzone != NULL)
+		ret = rte_memzone_free_thread_unsafe(memzone);
+	else
+		ret = -ENOENT;
+
+	rte_rwlock_read_unlock(&mcfg->mlock);
 	return ret;
 }
 
diff --git a/lib/librte_eal/include/rte_memzone.h b/lib/librte_eal/include/rte_memzone.h
index 091c9522f7..824dae7df9 100644
--- a/lib/librte_eal/include/rte_memzone.h
+++ b/lib/librte_eal/include/rte_memzone.h
@@ -270,6 +270,224 @@ const struct rte_memzone *rte_memzone_reserve_bounded(const char *name,
 			size_t len, int socket_id,
 			unsigned flags, unsigned align, unsigned bound);
 
+/**
+ * Reserve a portion of physical memory if it doesn't exist.
+ *
+ * This function reserves some memory and returns a pointer to a
+ * correctly filled memzone descriptor. If the memory already exists, return
+ * a pointer to pre-existing memzone descriptor. If the allocation cannot be
+ * done, return NULL.
+ *
+ * @note If memzone with a given name already exists, it will be returned
+ * regardless of whether it matches the requirements specified for allocation.
+ * It is the responsibility of the user to ensure that two different memzones
+ * with identical names are not attempted to be created.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ * memzones from memory that is already available. It will not trigger any
+ * new allocations.
+ *
+ * @note: When reserving memzones with len set to 0, it is preferable to also
+ * set a valid socket_id. Setting socket_id to SOCKET_ID_ANY is supported, but
+ * will likely not yield expected results. Specifically, the resulting memzone
+ * may not necessarily be the biggest memzone available, but rather biggest
+ * memzone available on socket id corresponding to an lcore from which
+ * reservation was called.
+ *
+ * @param name
+ *   The name of the memzone. If the memzone with this name already exists, the
+ *   function will return existing memzone instead of allocating a new one.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param socket_id
+ *   The socket identifier in the case of
+ *   NUMA. The value can be SOCKET_ID_ANY if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental
+const struct rte_memzone *rte_memzone_lookup_reserve(const char *name,
+		size_t len, int socket_id, unsigned int flags);
+
+/**
+ * Reserve a portion of physical memory if it doesn't exist, with alignment on
+ * a specified boundary.
+ *
+ * This function reserves some memory with alignment on a specified
+ * boundary, and returns a pointer to a correctly filled memzone
+ * descriptor. If memory already exists, return pointer to pre-existing
+ * memzone descriptor. If the allocation cannot be done or if the alignment
+ * is not a power of 2, returns NULL.
+ *
+ * @note If memzone with a given name already exists, it will be returned
+ * regardless of whether it matches the requirements specified for allocation.
+ * It is the responsibility of the user to ensure that two different memzones
+ * with identical names are not attempted to be created.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ * memzones from memory that is already available. It will not trigger any
+ * new allocations.
+ *
+ * @note: When reserving memzones with len set to 0, it is preferable to also
+ * set a valid socket_id. Setting socket_id to SOCKET_ID_ANY is supported, but
+ * will likely not yield expected results. Specifically, the resulting memzone
+ * may not necessarily be the biggest memzone available, but rather biggest
+ * memzone available on socket id corresponding to an lcore from which
+ * reservation was called.
+ *
+ * @param name
+ *   The name of the memzone. If the memzone with this name already exists, the
+ *   function will return existing memzone instead of allocating a new one.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param socket_id
+ *   The socket identifier in the case of
+ *   NUMA. The value can be SOCKET_ID_ANY if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @param align
+ *   Alignment for resulting memzone. Must be a power of 2.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental
+const struct rte_memzone *rte_memzone_lookup_reserve_aligned(const char *name,
+		size_t len, int socket_id, unsigned int flags,
+		unsigned int align);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reserve a portion of physical memory if it doesn't exist, with specified
+ * alignment and boundary.
+ *
+ * This function reserves some memory with specified alignment and
+ * boundary, and returns a pointer to a correctly filled memzone
+ * descriptor. If memory already exists, return pointer to pre-existing
+ * memzone descriptor. If the allocation cannot be done or if the alignment
+ * or boundary are not a power of 2, returns NULL.
+ * Memory buffer is reserved in a way, that it wouldn't cross specified
+ * boundary. That implies that requested length should be less or equal
+ * then boundary.
+ *
+ * @note If memzone with a given name already exists, it will be returned
+ * regardless of whether it matches the requirements specified for allocation.
+ * It is the responsibility of the user to ensure that two different memzones
+ * with identical names are not attempted to be created.
+ *
+ * @note Reserving memzones with len set to 0 will only attempt to allocate
+ * memzones from memory that is already available. It will not trigger any
+ * new allocations.
+ *
+ * @note: When reserving memzones with len set to 0, it is preferable to also
+ * set a valid socket_id. Setting socket_id to SOCKET_ID_ANY is supported, but
+ * will likely not yield expected results. Specifically, the resulting memzone
+ * may not necessarily be the biggest memzone available, but rather biggest
+ * memzone available on socket id corresponding to an lcore from which
+ * reservation was called.
+ *
+ * @param name
+ *   The name of the memzone. If the memzone with this name already exists, the
+ *   function will return existing memzone instead of allocating a new one.
+ * @param len
+ *   The size of the memory to be reserved. If it
+ *   is 0, the biggest contiguous zone will be reserved.
+ * @param socket_id
+ *   The socket identifier in the case of
+ *   NUMA. The value can be SOCKET_ID_ANY if there is no NUMA
+ *   constraint for the reserved zone.
+ * @param flags
+ *   The flags parameter is used to request memzones to be
+ *   taken from specifically sized hugepages.
+ *   - RTE_MEMZONE_2MB - Reserved from 2MB pages
+ *   - RTE_MEMZONE_1GB - Reserved from 1GB pages
+ *   - RTE_MEMZONE_16MB - Reserved from 16MB pages
+ *   - RTE_MEMZONE_16GB - Reserved from 16GB pages
+ *   - RTE_MEMZONE_256KB - Reserved from 256KB pages
+ *   - RTE_MEMZONE_256MB - Reserved from 256MB pages
+ *   - RTE_MEMZONE_512MB - Reserved from 512MB pages
+ *   - RTE_MEMZONE_4GB - Reserved from 4GB pages
+ *   - RTE_MEMZONE_SIZE_HINT_ONLY - Allow alternative page size to be used if
+ *                                  the requested page size is unavailable.
+ *                                  If this flag is not set, the function
+ *                                  will return error on an unavailable size
+ *                                  request.
+ *   - RTE_MEMZONE_IOVA_CONTIG - Ensure reserved memzone is IOVA-contiguous.
+ *                               This option should be used when allocating
+ *                               memory intended for hardware rings etc.
+ * @param align
+ *   Alignment for resulting memzone. Must be a power of 2.
+ * @param bound
+ *   Boundary for resulting memzone. Must be a power of 2 or zero.
+ *   Zero value implies no boundary condition.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental
+const struct rte_memzone *rte_memzone_lookup_reserve_bounded(const char *name,
+			size_t len, int socket_id,
+			unsigned int flags, unsigned int align,
+			unsigned int bound);
+
 /**
  * Free a memzone.
  *
@@ -281,6 +499,23 @@ const struct rte_memzone *rte_memzone_reserve_bounded(const char *name,
  */
 int rte_memzone_free(const struct rte_memzone *mz);
 
+/**
+ * Lookup and free a memzone if it exists.
+ *
+ * @param name
+ *   The name of the memzone to lookup and free.
+ * @return
+ *   A pointer to a correctly-filled read-only memzone descriptor, or NULL
+ *   on error.
+ *   On error case, rte_errno will be set appropriately:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ *    - EINVAL - invalid parameters
+ */
+__rte_experimental
+int rte_memzone_lookup_free(const char *name);
+
 /**
  * Lookup for a memzone.
  *
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 6088e7f6c3..c394dc22bc 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -374,6 +374,10 @@ EXPERIMENTAL {
 	per_lcore_trace_mem;
 	per_lcore_trace_point_sz;
 	rte_log_can_log;
+	rte_memzone_lookup_reserve;
+	rte_memzone_lookup_reserve_aligned;
+	rte_memzone_lookup_reserve_bounded;
+	rte_memzone_lookup_free;
 	rte_thread_getname;
 	rte_trace_dump;
 	rte_trace_is_enabled;