get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
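As an illustrative sketch (not part of the Patchwork documentation itself), a client might fetch a patch with `get` and read a few fields of the JSON response shown below. The `requests` call is left as a comment so the snippet stays self-contained; the inline sample is a trimmed copy of the response on this page:

```python
import json

# GET /api/patches/123591/ returns the JSON document shown below.
# With the 'requests' library this would be roughly:
#   patch = requests.get("http://patches.dpdk.org/api/patches/123591/").json()
# Here we parse a trimmed inline copy of that response instead.
sample = """
{
    "id": 123591,
    "name": "[v7] mempool cache: add zero-copy get and put functions",
    "state": "superseded",
    "archived": true,
    "check": "fail"
}
"""
patch = json.loads(sample)
print(patch["id"], patch["state"])  # 123591 superseded
```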

GET /api/patches/123591/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 123591,
    "url": "http://patches.dpdk.org/api/patches/123591/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20230209145232.129844-1-mb@smartsharesystems.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20230209145232.129844-1-mb@smartsharesystems.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20230209145232.129844-1-mb@smartsharesystems.com",
    "date": "2023-02-09T14:52:32",
    "name": "[v7] mempool cache: add zero-copy get and put functions",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "ed2f2e302b629b3a4f954b791cd00f97d94d3780",
    "submitter": {
        "id": 591,
        "url": "http://patches.dpdk.org/api/people/591/?format=api",
        "name": "Morten Brørup",
        "email": "mb@smartsharesystems.com"
    },
    "delegate": {
        "id": 24651,
        "url": "http://patches.dpdk.org/api/users/24651/?format=api",
        "username": "dmarchand",
        "first_name": "David",
        "last_name": "Marchand",
        "email": "david.marchand@redhat.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20230209145232.129844-1-mb@smartsharesystems.com/mbox/",
    "series": [
        {
            "id": 26927,
            "url": "http://patches.dpdk.org/api/series/26927/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=26927",
            "date": "2023-02-09T14:52:32",
            "name": "[v7] mempool cache: add zero-copy get and put functions",
            "version": 7,
            "mbox": "http://patches.dpdk.org/series/26927/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/123591/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/123591/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 6172341C52;\n\tThu,  9 Feb 2023 15:52:36 +0100 (CET)",
            "from mails.dpdk.org (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id C82A8410EA;\n\tThu,  9 Feb 2023 15:52:35 +0100 (CET)",
            "from smartserver.smartsharesystems.com\n (smartserver.smartsharesystems.com [77.243.40.215])\n by mails.dpdk.org (Postfix) with ESMTP id F02D040EDC\n for <dev@dpdk.org>; Thu,  9 Feb 2023 15:52:34 +0100 (CET)",
            "from dkrd2.smartsharesys.local ([192.168.4.12]) by\n smartserver.smartsharesystems.com with Microsoft SMTPSVC(6.0.3790.4675);\n Thu, 9 Feb 2023 15:52:33 +0100"
        ],
        "From": "=?utf-8?q?Morten_Br=C3=B8rup?= <mb@smartsharesystems.com>",
        "To": "olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru,\n honnappa.nagarahalli@arm.com, kamalakshitha.aligeri@arm.com,\n bruce.richardson@intel.com, konstantin.ananyev@huawei.com, dev@dpdk.org",
        "Cc": "nd@arm.com, david.marchand@redhat.com,\n =?utf-8?q?Morten_Br=C3=B8rup?= <mb@smartsharesystems.com>",
        "Subject": "[PATCH v7] mempool cache: add zero-copy get and put functions",
        "Date": "Thu,  9 Feb 2023 15:52:32 +0100",
        "Message-Id": "<20230209145232.129844-1-mb@smartsharesystems.com>",
        "X-Mailer": "git-send-email 2.17.1",
        "In-Reply-To": "\n <98CBD80474FA8B44BF855DF32C47DC35D87488@smartserver.smartshare.dk>",
        "References": "<98CBD80474FA8B44BF855DF32C47DC35D87488@smartserver.smartshare.dk>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "X-OriginalArrivalTime": "09 Feb 2023 14:52:33.0709 (UTC)\n FILETIME=[245EBDD0:01D93C96]",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org"
    },
    "content": "Zero-copy access to mempool caches is beneficial for PMD performance, and\nmust be provided by the mempool library to fix [Bug 1052] without a\nperformance regression.\n\n[Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052\n\nBugzilla ID: 1052\n\nSigned-off-by: Morten Brørup <mb@smartsharesystems.com>\nAcked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>\n\nv7:\n* Fix typo in function description. (checkpatch)\n* Zero-copy functions may set rte_errno; include rte_errno header file.\n  (ci/loongarch-compilation)\nv6:\n* Improve description of the 'n' parameter to the zero-copy get function.\n  (Konstantin, Bruce)\n* The caches used for zero-copy may not be user-owned, so remove this word\n  from the function descriptions. (Kamalakshitha)\nv5:\n* Bugfix: Compare zero-copy get request to the cache size instead of the\n  flush threshold; otherwise refill could overflow the memory allocated\n  for the cache. (Andrew)\n* Split the zero-copy put function into an internal function doing the\n  work, and a public function with trace.\n* Avoid code duplication by rewriting rte_mempool_do_generic_put() to use\n  the internal zero-copy put function. (Andrew)\n* Corrected the return type of rte_mempool_cache_zc_put_bulk() from void *\n  to void **; it returns a pointer to an array of objects.\n* Fix coding style: Add missing curly brackets. (Andrew)\nv4:\n* Fix checkpatch warnings.\nv3:\n* Bugfix: Respect the cache size; compare to the flush threshold instead\n  of RTE_MEMPOOL_CACHE_MAX_SIZE.\n* Added 'rewind' function for incomplete 'put' operations. (Konstantin)\n* Replace RTE_ASSERTs with runtime checks of the request size.\n  Instead of failing, return NULL if the request is too big. 
(Konstantin)\n* Modified comparison to prevent overflow if n is really huge and len is\n  non-zero.\n* Updated the comments in the code.\nv2:\n* Fix checkpatch warnings.\n* Fix missing registration of trace points.\n* The functions are inline, so they don't go into the map file.\nv1 changes from the RFC:\n* Removed run-time parameter checks. (Honnappa)\n  This is a hot fast path function; requiring correct application\n  behaviour, i.e. function parameters must be valid.\n* Added RTE_ASSERT for parameters instead.\n  Code for this is only generated if built with RTE_ENABLE_ASSERT.\n* Removed fallback when 'cache' parameter is not set. (Honnappa)\n* Chose the simple get function; i.e. do not move the existing objects in\n  the cache to the top of the new stack, just leave them at the bottom.\n* Renamed the functions. Other suggestions are welcome, of course. ;-)\n* Updated the function descriptions.\n* Added the functions to trace_fp and version.map.\n---\n lib/mempool/mempool_trace_points.c |   9 ++\n lib/mempool/rte_mempool.h          | 237 +++++++++++++++++++++++++----\n lib/mempool/rte_mempool_trace_fp.h |  23 +++\n lib/mempool/version.map            |   5 +\n 4 files changed, 245 insertions(+), 29 deletions(-)",
    "diff": "diff --git a/lib/mempool/mempool_trace_points.c b/lib/mempool/mempool_trace_points.c\nindex 4ad76deb34..83d353a764 100644\n--- a/lib/mempool/mempool_trace_points.c\n+++ b/lib/mempool/mempool_trace_points.c\n@@ -77,3 +77,12 @@ RTE_TRACE_POINT_REGISTER(rte_mempool_trace_ops_free,\n \n RTE_TRACE_POINT_REGISTER(rte_mempool_trace_set_ops_byname,\n \tlib.mempool.set.ops.byname)\n+\n+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_bulk,\n+\tlib.mempool.cache.zc.put.bulk)\n+\n+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_put_rewind,\n+\tlib.mempool.cache.zc.put.rewind)\n+\n+RTE_TRACE_POINT_REGISTER(rte_mempool_trace_cache_zc_get_bulk,\n+\tlib.mempool.cache.zc.get.bulk)\ndiff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h\nindex 9f530db24b..15bc0af92e 100644\n--- a/lib/mempool/rte_mempool.h\n+++ b/lib/mempool/rte_mempool.h\n@@ -1346,6 +1346,198 @@ rte_mempool_cache_flush(struct rte_mempool_cache *cache,\n \tcache->len = 0;\n }\n \n+\n+/**\n+ * @internal used by rte_mempool_cache_zc_put_bulk() and rte_mempool_do_generic_put().\n+ *\n+ * Zero-copy put objects in a mempool cache backed by the specified mempool.\n+ *\n+ * @param cache\n+ *   A pointer to the mempool cache.\n+ * @param mp\n+ *   A pointer to the mempool.\n+ * @param n\n+ *   The number of objects to be put in the mempool cache.\n+ * @return\n+ *   The pointer to where to put the objects in the mempool cache.\n+ *   NULL if the request itself is too big for the cache, i.e.\n+ *   exceeds the cache flush threshold.\n+ */\n+static __rte_always_inline void **\n+__rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,\n+\t\tstruct rte_mempool *mp,\n+\t\tunsigned int n)\n+{\n+\tvoid **cache_objs;\n+\n+\tRTE_ASSERT(cache != NULL);\n+\tRTE_ASSERT(mp != NULL);\n+\n+\tif (n <= cache->flushthresh - cache->len) {\n+\t\t/*\n+\t\t * The objects can be added to the cache without crossing the\n+\t\t * flush threshold.\n+\t\t */\n+\t\tcache_objs = 
&cache->objs[cache->len];\n+\t\tcache->len += n;\n+\t} else if (likely(n <= cache->flushthresh)) {\n+\t\t/*\n+\t\t * The request itself fits into the cache.\n+\t\t * But first, the cache must be flushed to the backend, so\n+\t\t * adding the objects does not cross the flush threshold.\n+\t\t */\n+\t\tcache_objs = &cache->objs[0];\n+\t\trte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);\n+\t\tcache->len = n;\n+\t} else {\n+\t\t/* The request itself is too big for the cache. */\n+\t\treturn NULL;\n+\t}\n+\n+\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);\n+\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);\n+\n+\treturn cache_objs;\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.\n+ *\n+ * Zero-copy put objects in a mempool cache backed by the specified mempool.\n+ *\n+ * @param cache\n+ *   A pointer to the mempool cache.\n+ * @param mp\n+ *   A pointer to the mempool.\n+ * @param n\n+ *   The number of objects to be put in the mempool cache.\n+ * @return\n+ *   The pointer to where to put the objects in the mempool cache.\n+ *   NULL if the request itself is too big for the cache, i.e.\n+ *   exceeds the cache flush threshold.\n+ */\n+__rte_experimental\n+static __rte_always_inline void **\n+rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,\n+\t\tstruct rte_mempool *mp,\n+\t\tunsigned int n)\n+{\n+\tRTE_ASSERT(cache != NULL);\n+\tRTE_ASSERT(mp != NULL);\n+\n+\trte_mempool_trace_cache_zc_put_bulk(cache, mp, n);\n+\treturn __rte_mempool_cache_zc_put_bulk(cache, mp, n);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.\n+ *\n+ * Zero-copy un-put objects in a mempool cache.\n+ *\n+ * @param cache\n+ *   A pointer to the mempool cache.\n+ * @param n\n+ *   The number of objects not put in the mempool cache after calling\n+ *   rte_mempool_cache_zc_put_bulk().\n+ */\n+__rte_experimental\n+static __rte_always_inline 
void\n+rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,\n+\t\tunsigned int n)\n+{\n+\tRTE_ASSERT(cache != NULL);\n+\tRTE_ASSERT(n <= cache->len);\n+\n+\trte_mempool_trace_cache_zc_put_rewind(cache, n);\n+\n+\tcache->len -= n;\n+\n+\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, (int)-n);\n+}\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.\n+ *\n+ * Zero-copy get objects from a mempool cache backed by the specified mempool.\n+ *\n+ * @param cache\n+ *   A pointer to the mempool cache.\n+ * @param mp\n+ *   A pointer to the mempool.\n+ * @param n\n+ *   The number of objects to be made available for extraction from the mempool cache.\n+ * @return\n+ *   The pointer to the objects in the mempool cache.\n+ *   NULL on error; i.e. the cache + the pool does not contain 'n' objects.\n+ *   With rte_errno set to the error code of the mempool dequeue function,\n+ *   or EINVAL if the request itself is too big for the cache, i.e.\n+ *   exceeds the cache flush threshold.\n+ */\n+__rte_experimental\n+static __rte_always_inline void *\n+rte_mempool_cache_zc_get_bulk(struct rte_mempool_cache *cache,\n+\t\tstruct rte_mempool *mp,\n+\t\tunsigned int n)\n+{\n+\tunsigned int len, size;\n+\n+\tRTE_ASSERT(cache != NULL);\n+\tRTE_ASSERT(mp != NULL);\n+\n+\trte_mempool_trace_cache_zc_get_bulk(cache, mp, n);\n+\n+\tlen = cache->len;\n+\tsize = cache->size;\n+\n+\tif (n <= len) {\n+\t\t/* The request can be satisfied from the cache as is. 
*/\n+\t\tlen -= n;\n+\t} else if (likely(n <= size)) {\n+\t\t/*\n+\t\t * The request itself can be satisfied from the cache.\n+\t\t * But first, the cache must be filled from the backend;\n+\t\t * fetch size + requested - len objects.\n+\t\t */\n+\t\tint ret;\n+\n+\t\tret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], size + n - len);\n+\t\tif (unlikely(ret < 0)) {\n+\t\t\t/*\n+\t\t\t * We are buffer constrained.\n+\t\t\t * Do not fill the cache, just satisfy the request.\n+\t\t\t */\n+\t\t\tret = rte_mempool_ops_dequeue_bulk(mp, &cache->objs[len], n - len);\n+\t\t\tif (unlikely(ret < 0)) {\n+\t\t\t\t/* Unable to satisfy the request. */\n+\n+\t\t\t\tRTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);\n+\t\t\t\tRTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);\n+\n+\t\t\t\trte_errno = -ret;\n+\t\t\t\treturn NULL;\n+\t\t\t}\n+\n+\t\t\tlen = 0;\n+\t\t} else {\n+\t\t\tlen = size;\n+\t\t}\n+\t} else {\n+\t\t/* The request itself is too big for the cache. */\n+\t\trte_errno = EINVAL;\n+\t\treturn NULL;\n+\t}\n+\n+\tcache->len = len;\n+\n+\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_bulk, 1);\n+\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, get_success_objs, n);\n+\n+\treturn &cache->objs[len];\n+}\n+\n /**\n  * @internal Put several objects back in the mempool; used internally.\n  * @param mp\n@@ -1364,32 +1556,25 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,\n {\n \tvoid **cache_objs;\n \n-\t/* No cache provided */\n-\tif (unlikely(cache == NULL))\n-\t\tgoto driver_enqueue;\n+\t/* No cache provided? */\n+\tif (unlikely(cache == NULL)) {\n+\t\t/* Increment stats now, adding in mempool always succeeds. 
*/\n+\t\tRTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);\n+\t\tRTE_MEMPOOL_STAT_ADD(mp, put_objs, n);\n \n-\t/* increment stat now, adding in mempool always success */\n-\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);\n-\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);\n+\t\tgoto driver_enqueue;\n+\t}\n \n-\t/* The request itself is too big for the cache */\n-\tif (unlikely(n > cache->flushthresh))\n-\t\tgoto driver_enqueue_stats_incremented;\n+\t/* Prepare to add the objects to the cache. */\n+\tcache_objs = __rte_mempool_cache_zc_put_bulk(cache, mp, n);\n \n-\t/*\n-\t * The cache follows the following algorithm:\n-\t *   1. If the objects cannot be added to the cache without crossing\n-\t *      the flush threshold, flush the cache to the backend.\n-\t *   2. Add the objects to the cache.\n-\t */\n+\t/* The request itself is too big for the cache? */\n+\tif (unlikely(cache_objs == NULL)) {\n+\t\t/* Increment stats now, adding in mempool always succeeds. */\n+\t\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);\n+\t\tRTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);\n \n-\tif (cache->len + n <= cache->flushthresh) {\n-\t\tcache_objs = &cache->objs[cache->len];\n-\t\tcache->len += n;\n-\t} else {\n-\t\tcache_objs = &cache->objs[0];\n-\t\trte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);\n-\t\tcache->len = n;\n+\t\tgoto driver_enqueue;\n \t}\n \n \t/* Add the objects to the cache. */\n@@ -1399,13 +1584,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,\n \n driver_enqueue:\n \n-\t/* increment stat now, adding in mempool always success */\n-\tRTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);\n-\tRTE_MEMPOOL_STAT_ADD(mp, put_objs, n);\n-\n-driver_enqueue_stats_incremented:\n-\n-\t/* push objects to the backend */\n+\t/* Push the objects to the backend. 
*/\n \trte_mempool_ops_enqueue_bulk(mp, obj_table, n);\n }\n \ndiff --git a/lib/mempool/rte_mempool_trace_fp.h b/lib/mempool/rte_mempool_trace_fp.h\nindex ed060e887c..14666457f7 100644\n--- a/lib/mempool/rte_mempool_trace_fp.h\n+++ b/lib/mempool/rte_mempool_trace_fp.h\n@@ -109,6 +109,29 @@ RTE_TRACE_POINT_FP(\n \trte_trace_point_emit_ptr(mempool);\n )\n \n+RTE_TRACE_POINT_FP(\n+\trte_mempool_trace_cache_zc_put_bulk,\n+\tRTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),\n+\trte_trace_point_emit_ptr(cache);\n+\trte_trace_point_emit_ptr(mempool);\n+\trte_trace_point_emit_u32(nb_objs);\n+)\n+\n+RTE_TRACE_POINT_FP(\n+\trte_mempool_trace_cache_zc_put_rewind,\n+\tRTE_TRACE_POINT_ARGS(void *cache, uint32_t nb_objs),\n+\trte_trace_point_emit_ptr(cache);\n+\trte_trace_point_emit_u32(nb_objs);\n+)\n+\n+RTE_TRACE_POINT_FP(\n+\trte_mempool_trace_cache_zc_get_bulk,\n+\tRTE_TRACE_POINT_ARGS(void *cache, void *mempool, uint32_t nb_objs),\n+\trte_trace_point_emit_ptr(cache);\n+\trte_trace_point_emit_ptr(mempool);\n+\trte_trace_point_emit_u32(nb_objs);\n+)\n+\n #ifdef __cplusplus\n }\n #endif\ndiff --git a/lib/mempool/version.map b/lib/mempool/version.map\nindex b67d7aace7..1383ae6db2 100644\n--- a/lib/mempool/version.map\n+++ b/lib/mempool/version.map\n@@ -63,6 +63,11 @@ EXPERIMENTAL {\n \t__rte_mempool_trace_ops_alloc;\n \t__rte_mempool_trace_ops_free;\n \t__rte_mempool_trace_set_ops_byname;\n+\n+\t# added in 23.03\n+\t__rte_mempool_trace_cache_zc_put_bulk;\n+\t__rte_mempool_trace_cache_zc_put_rewind;\n+\t__rte_mempool_trace_cache_zc_get_bulk;\n };\n \n INTERNAL {\n",
    "prefixes": [
        "v7"
    ]
}
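Write access via the `put`/`patch` methods above requires authentication and per-project maintainer rights. A hypothetical sketch of building a partial-update request follows; the token placeholder, the `requests` call, and the chosen field values are illustrative assumptions, not taken from this page:

```python
import json

# Fields to change; PATCH sends only the fields being updated,
# whereas PUT replaces the whole writable representation.
update = {"state": "accepted", "archived": False}

# With the 'requests' library and a real API token, roughly:
#   requests.patch(
#       "http://patches.dpdk.org/api/patches/123591/",
#       headers={"Authorization": "Token <your-token>"},
#       json=update,
#   )
print(json.dumps(update))  # the JSON request body that would be sent
```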