get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
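The endpoints above can be driven with any HTTP client. Below is a minimal sketch, in Python, of building (but not sending) the corresponding requests. The `Authorization: Token` header follows Patchwork's token authentication scheme; the `TOKEN` value and the helper function names are placeholders of this sketch, not part of the API itself. Write access (PUT/PATCH) requires a token from an account with maintainer rights on the project.

```python
# Sketch: assemble requests against the Patchwork patches API.
# BASE matches the instance shown below; TOKEN is a placeholder.

BASE = "https://patches.dpdk.org/api"
TOKEN = "0123456789abcdef"  # placeholder, not a real token


def get_patch_request(patch_id):
    """Request parts for GET /api/patches/{id}/ (read-only, no auth needed)."""
    return {"method": "GET", "url": f"{BASE}/patches/{patch_id}/"}


def update_patch_request(patch_id, **fields):
    """Request parts for PATCH /api/patches/{id}/ (partial update, auth required)."""
    return {
        "method": "PATCH",
        "url": f"{BASE}/patches/{patch_id}/",
        "headers": {"Authorization": f"Token {TOKEN}"},
        "json": fields,  # e.g. {"state": "accepted", "archived": True}
    }


req = update_patch_request(39018, state="accepted", archived=True)
print(req["method"], req["url"])
```

Passing these dicts to an HTTP library (e.g. `requests.request(**req)`) would perform the call; only mutable fields such as `state`, `archived`, and `delegate` are writable.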

GET /api/patches/39018/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 39018,
    "url": "https://patches.dpdk.org/api/patches/39018/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1524740363-3081-4-git-send-email-arybchenko@solarflare.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1524740363-3081-4-git-send-email-arybchenko@solarflare.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1524740363-3081-4-git-send-email-arybchenko@solarflare.com",
    "date": "2018-04-26T10:59:21",
    "name": "[dpdk-dev,v4,3/5] mempool: support block dequeue operation",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "64568cca0812431fe2b2e6126c21091e5f9a4049",
    "submitter": {
        "id": 607,
        "url": "https://patches.dpdk.org/api/people/607/?format=api",
        "name": "Andrew Rybchenko",
        "email": "arybchenko@solarflare.com"
    },
    "delegate": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/users/1/?format=api",
        "username": "tmonjalo",
        "first_name": "Thomas",
        "last_name": "Monjalon",
        "email": "thomas@monjalon.net"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1524740363-3081-4-git-send-email-arybchenko@solarflare.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/39018/comments/",
    "check": "fail",
    "checks": "https://patches.dpdk.org/api/patches/39018/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id BF8918D87;\n\tThu, 26 Apr 2018 12:59:43 +0200 (CEST)",
            "from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com\n\t[148.163.129.52]) by dpdk.org (Postfix) with ESMTP id F40DC7D00\n\tfor <dev@dpdk.org>; Thu, 26 Apr 2018 12:59:32 +0200 (CEST)",
            "from webmail.solarflare.com (webmail.solarflare.com\n\t[12.187.104.26])\n\t(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1-us1.ppe-hosted.com (Proofpoint Essentials ESMTP Server) with\n\tESMTPS id C6513940073; Thu, 26 Apr 2018 10:59:31 +0000 (UTC)",
            "from ocex03.SolarFlarecom.com (10.20.40.36) by\n\tocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server\n\t(TLS) id 15.0.1044.25; Thu, 26 Apr 2018 03:59:29 -0700",
            "from opal.uk.solarflarecom.com (10.17.10.1) by\n\tocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server\n\t(TLS) id\n\t15.0.1044.25 via Frontend Transport; Thu, 26 Apr 2018 03:59:28 -0700",
            "from uklogin.uk.solarflarecom.com (uklogin.uk.solarflarecom.com\n\t[10.17.10.10])\n\tby opal.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id\n\tw3QAxRT5008916; Thu, 26 Apr 2018 11:59:27 +0100",
            "from uklogin.uk.solarflarecom.com (localhost.localdomain\n\t[127.0.0.1])\n\tby uklogin.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id\n\tw3QAxRLK003122; Thu, 26 Apr 2018 11:59:27 +0100"
        ],
        "X-Virus-Scanned": "Proofpoint Essentials engine",
        "From": "Andrew Rybchenko <arybchenko@solarflare.com>",
        "To": "<dev@dpdk.org>",
        "CC": "Olivier MATZ <olivier.matz@6wind.com>, \"Artem V. Andreev\"\n\t<Artem.Andreev@oktetlabs.ru>",
        "Date": "Thu, 26 Apr 2018 11:59:21 +0100",
        "Message-ID": "<1524740363-3081-4-git-send-email-arybchenko@solarflare.com>",
        "X-Mailer": "git-send-email 1.8.2.3",
        "In-Reply-To": "<1524740363-3081-1-git-send-email-arybchenko@solarflare.com>",
        "References": "<1511539591-20966-1-git-send-email-arybchenko@solarflare.com>\n\t<1524740363-3081-1-git-send-email-arybchenko@solarflare.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-MDID": "1524740372-tJGnHLhMLzLe",
        "Subject": "[dpdk-dev] [PATCH v4 3/5] mempool: support block dequeue operation",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: \"Artem V. Andreev\" <Artem.Andreev@oktetlabs.ru>\n\nIf mempool manager supports object blocks (physically and virtual\ncontiguous set of objects), it is sufficient to get the first\nobject only and the function allows to avoid filling in of\ninformation about each block member.\n\nSigned-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>\nSigned-off-by: Andrew Rybchenko <arybchenko@solarflare.com>\nAcked-by: Olivier Matz <olivier.matz@6wind.com>\n---\n doc/guides/rel_notes/deprecation.rst       |   7 --\n lib/librte_mempool/Makefile                |   1 +\n lib/librte_mempool/meson.build             |   2 +\n lib/librte_mempool/rte_mempool.c           |  39 +++++++++\n lib/librte_mempool/rte_mempool.h           | 131 +++++++++++++++++++++++++++--\n lib/librte_mempool/rte_mempool_ops.c       |   1 +\n lib/librte_mempool/rte_mempool_version.map |   1 +\n 7 files changed, 170 insertions(+), 12 deletions(-)",
    "diff": "diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst\nindex 72ab33cb7..da156c3cc 100644\n--- a/doc/guides/rel_notes/deprecation.rst\n+++ b/doc/guides/rel_notes/deprecation.rst\n@@ -42,13 +42,6 @@ Deprecation Notices\n \n   - ``rte_eal_mbuf_default_mempool_ops``\n \n-* mempool: several API and ABI changes are planned in v18.05.\n-\n-  The following changes are planned:\n-\n-  - addition of new op to allocate contiguous\n-    block of objects if underlying driver supports it.\n-\n * mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic\n   definition in line with the ethdev TM and MTR APIs. Currently, this field\n   is defined in librte_sched in a non-generic way. The new generic format\ndiff --git a/lib/librte_mempool/Makefile b/lib/librte_mempool/Makefile\nindex 7f19f005a..e3c32b14f 100644\n--- a/lib/librte_mempool/Makefile\n+++ b/lib/librte_mempool/Makefile\n@@ -10,6 +10,7 @@ CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()\n # from earlier deprecated rte_mempool_populate_phys_tab()\n CFLAGS += -Wno-deprecated-declarations\n+CFLAGS += -DALLOW_EXPERIMENTAL_API\n LDLIBS += -lrte_eal -lrte_ring\n \n EXPORT_MAP := rte_mempool_version.map\ndiff --git a/lib/librte_mempool/meson.build b/lib/librte_mempool/meson.build\nindex baf2d24d5..d507e5511 100644\n--- a/lib/librte_mempool/meson.build\n+++ b/lib/librte_mempool/meson.build\n@@ -1,6 +1,8 @@\n # SPDX-License-Identifier: BSD-3-Clause\n # Copyright(c) 2017 Intel Corporation\n \n+allow_experimental_apis = true\n+\n extra_flags = []\n \n # Allow deprecated symbol to use deprecated rte_mempool_populate_iova_tab()\ndiff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c\nindex 84b3d640f..cf5d124ec 100644\n--- a/lib/librte_mempool/rte_mempool.c\n+++ b/lib/librte_mempool/rte_mempool.c\n@@ -1255,6 +1255,36 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,\n #endif\n }\n \n+void\n+rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,\n+\tvoid * const *first_obj_table_const, unsigned int n, int free)\n+{\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tstruct rte_mempool_info info;\n+\tconst size_t total_elt_sz =\n+\t\tmp->header_size + mp->elt_size + mp->trailer_size;\n+\tunsigned int i, j;\n+\n+\trte_mempool_ops_get_info(mp, &info);\n+\n+\tfor (i = 0; i < n; ++i) {\n+\t\tvoid *first_obj = first_obj_table_const[i];\n+\n+\t\tfor (j = 0; j < info.contig_block_size; ++j) {\n+\t\t\tvoid *obj;\n+\n+\t\t\tobj = (void *)((uintptr_t)first_obj + j * total_elt_sz);\n+\t\t\trte_mempool_check_cookies(mp, &obj, 1, free);\n+\t\t}\n+\t}\n+#else\n+\tRTE_SET_USED(mp);\n+\tRTE_SET_USED(first_obj_table_const);\n+\tRTE_SET_USED(n);\n+\tRTE_SET_USED(free);\n+#endif\n+}\n+\n #ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n static void\n mempool_obj_audit(struct rte_mempool *mp, __rte_unused void *opaque,\n@@ -1320,6 +1350,7 @@ void\n rte_mempool_dump(FILE *f, struct rte_mempool *mp)\n {\n #ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tstruct rte_mempool_info info;\n \tstruct rte_mempool_debug_stats sum;\n \tunsigned lcore_id;\n #endif\n@@ -1361,6 +1392,7 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)\n \n \t/* sum and dump statistics */\n #ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\trte_mempool_ops_get_info(mp, &info);\n \tmemset(&sum, 0, sizeof(sum));\n \tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n \t\tsum.put_bulk += mp->stats[lcore_id].put_bulk;\n@@ -1369,6 +1401,8 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)\n \t\tsum.get_success_objs += mp->stats[lcore_id].get_success_objs;\n \t\tsum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;\n \t\tsum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;\n+\t\tsum.get_success_blks += mp->stats[lcore_id].get_success_blks;\n+\t\tsum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;\n \t}\n \tfprintf(f, \"  stats:\\n\");\n \tfprintf(f, \"    put_bulk=%\"PRIu64\"\\n\", sum.put_bulk);\n@@ -1377,6 +1411,11 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)\n \tfprintf(f, \"    get_success_objs=%\"PRIu64\"\\n\", sum.get_success_objs);\n \tfprintf(f, \"    get_fail_bulk=%\"PRIu64\"\\n\", sum.get_fail_bulk);\n \tfprintf(f, \"    get_fail_objs=%\"PRIu64\"\\n\", sum.get_fail_objs);\n+\tif (info.contig_block_size > 0) {\n+\t\tfprintf(f, \"    get_success_blks=%\"PRIu64\"\\n\",\n+\t\t\tsum.get_success_blks);\n+\t\tfprintf(f, \"    get_fail_blks=%\"PRIu64\"\\n\", sum.get_fail_blks);\n+\t}\n #else\n \tfprintf(f, \"  no statistics available\\n\");\n #endif\ndiff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h\nindex 853f2da4d..1f59553b3 100644\n--- a/lib/librte_mempool/rte_mempool.h\n+++ b/lib/librte_mempool/rte_mempool.h\n@@ -70,6 +70,10 @@ struct rte_mempool_debug_stats {\n \tuint64_t get_success_objs; /**< Objects successfully allocated. */\n \tuint64_t get_fail_bulk;    /**< Failed allocation number. */\n \tuint64_t get_fail_objs;    /**< Objects that failed to be allocated. */\n+\t/** Successful allocation number of contiguous blocks. */\n+\tuint64_t get_success_blks;\n+\t/** Failed allocation number of contiguous blocks. */\n+\tuint64_t get_fail_blks;\n } __rte_cache_aligned;\n #endif\n \n@@ -199,11 +203,8 @@ struct rte_mempool_memhdr {\n  * a number of cases when something small is added.\n  */\n struct rte_mempool_info {\n-\t/*\n-\t * Dummy structure member to make it non emtpy until the first\n-\t * real member is added.\n-\t */\n-\tunsigned int dummy;\n+\t/** Number of objects in the contiguous block */\n+\tunsigned int contig_block_size;\n } __rte_cache_aligned;\n \n /**\n@@ -282,8 +283,16 @@ struct rte_mempool {\n \t\t\tmp->stats[__lcore_id].name##_bulk += 1;\t\\\n \t\t}                                               \\\n \t} while(0)\n+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {                    \\\n+\t\tunsigned int __lcore_id = rte_lcore_id();       \\\n+\t\tif (__lcore_id < RTE_MAX_LCORE) {               \\\n+\t\t\tmp->stats[__lcore_id].name##_blks += n;\t\\\n+\t\t\tmp->stats[__lcore_id].name##_bulk += 1;\t\\\n+\t\t}                                               \\\n+\t} while (0)\n #else\n #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)\n+#define __MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, name, n) do {} while (0)\n #endif\n \n /**\n@@ -351,6 +360,38 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,\n #define __mempool_check_cookies(mp, obj_table_const, n, free) do {} while(0)\n #endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * @internal Check contiguous object blocks and update cookies or panic.\n+ *\n+ * @param mp\n+ *   Pointer to the memory pool.\n+ * @param first_obj_table_const\n+ *   Pointer to a table of void * pointers (first object of the contiguous\n+ *   object blocks).\n+ * @param n\n+ *   Number of contiguous object blocks.\n+ * @param free\n+ *   - 0: object is supposed to be allocated, mark it as free\n+ *   - 1: object is supposed to be free, mark it as allocated\n+ *   - 2: just check that cookie is valid (free or allocated)\n+ */\n+void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,\n+\tvoid * const *first_obj_table_const, unsigned int n, int free);\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \\\n+\t\t\t\t\t      free) \\\n+\trte_mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \\\n+\t\t\t\t\t\tfree)\n+#else\n+#define __mempool_contig_blocks_check_cookies(mp, first_obj_table_const, n, \\\n+\t\t\t\t\t      free) \\\n+\tdo {} while (0)\n+#endif /* RTE_LIBRTE_MEMPOOL_DEBUG */\n+\n #define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */\n \n /**\n@@ -382,6 +423,15 @@ typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,\n typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,\n \t\tvoid **obj_table, unsigned int n);\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Dequeue a number of contiquous object blocks from the external pool.\n+ */\n+typedef int (*rte_mempool_dequeue_contig_blocks_t)(struct rte_mempool *mp,\n+\t\t void **first_obj_table, unsigned int n);\n+\n /**\n  * Return the number of available objects in the external pool.\n  */\n@@ -548,6 +598,10 @@ struct rte_mempool_ops {\n \t * Get mempool info\n \t */\n \trte_mempool_get_info_t get_info;\n+\t/**\n+\t * Dequeue a number of contiguous object blocks.\n+\t */\n+\trte_mempool_dequeue_contig_blocks_t dequeue_contig_blocks;\n } __rte_cache_aligned;\n \n #define RTE_MEMPOOL_MAX_OPS_IDX 16  /**< Max registered ops structs */\n@@ -625,6 +679,30 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,\n \treturn ops->dequeue(mp, obj_table, n);\n }\n \n+/**\n+ * @internal Wrapper for mempool_ops dequeue_contig_blocks callback.\n+ *\n+ * @param[in] mp\n+ *   Pointer to the memory pool.\n+ * @param[out] first_obj_table\n+ *   Pointer to a table of void * pointers (first objects).\n+ * @param[in] n\n+ *   Number of blocks to get.\n+ * @return\n+ *   - 0: Success; got n objects.\n+ *   - <0: Error; code of dequeue function.\n+ */\n+static inline int\n+rte_mempool_ops_dequeue_contig_blocks(struct rte_mempool *mp,\n+\t\tvoid **first_obj_table, unsigned int n)\n+{\n+\tstruct rte_mempool_ops *ops;\n+\n+\tops = rte_mempool_get_ops(mp->ops_index);\n+\tRTE_ASSERT(ops->dequeue_contig_blocks != NULL);\n+\treturn ops->dequeue_contig_blocks(mp, first_obj_table, n);\n+}\n+\n /**\n  * @internal wrapper for mempool_ops enqueue callback.\n  *\n@@ -1539,6 +1617,49 @@ rte_mempool_get(struct rte_mempool *mp, void **obj_p)\n \treturn rte_mempool_get_bulk(mp, obj_p, 1);\n }\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice.\n+ *\n+ * Get a contiguous blocks of objects from the mempool.\n+ *\n+ * If cache is enabled, consider to flush it first, to reuse objects\n+ * as soon as possible.\n+ *\n+ * The application should check that the driver supports the operation\n+ * by calling rte_mempool_ops_get_info() and checking that `contig_block_size`\n+ * is not zero.\n+ *\n+ * @param mp\n+ *   A pointer to the mempool structure.\n+ * @param first_obj_table\n+ *   A pointer to a pointer to the first object in each block.\n+ * @param n\n+ *   The number of blocks to get from mempool.\n+ * @return\n+ *   - 0: Success; blocks taken.\n+ *   - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.\n+ *   - -EOPNOTSUPP: The mempool driver does not support block dequeue\n+ */\n+static __rte_always_inline int\n+__rte_experimental\n+rte_mempool_get_contig_blocks(struct rte_mempool *mp,\n+\t\t\t      void **first_obj_table, unsigned int n)\n+{\n+\tint ret;\n+\n+\tret = rte_mempool_ops_dequeue_contig_blocks(mp, first_obj_table, n);\n+\tif (ret == 0) {\n+\t\t__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_success, n);\n+\t\t__mempool_contig_blocks_check_cookies(mp, first_obj_table, n,\n+\t\t\t\t\t\t      1);\n+\t} else {\n+\t\t__MEMPOOL_CONTIG_BLOCKS_STAT_ADD(mp, get_fail, n);\n+\t}\n+\n+\treturn ret;\n+}\n+\n /**\n  * Return the number of entries in the mempool.\n  *\ndiff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c\nindex efc1c084c..a27e1fa51 100644\n--- a/lib/librte_mempool/rte_mempool_ops.c\n+++ b/lib/librte_mempool/rte_mempool_ops.c\n@@ -60,6 +60,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)\n \tops->calc_mem_size = h->calc_mem_size;\n \tops->populate = h->populate;\n \tops->get_info = h->get_info;\n+\tops->dequeue_contig_blocks = h->dequeue_contig_blocks;\n \n \trte_spinlock_unlock(&rte_mempool_ops_table.sl);\n \ndiff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map\nindex c9d16ecc4..1c406b5b0 100644\n--- a/lib/librte_mempool/rte_mempool_version.map\n+++ b/lib/librte_mempool/rte_mempool_version.map\n@@ -53,6 +53,7 @@ DPDK_17.11 {\n DPDK_18.05 {\n \tglobal:\n \n+\trte_mempool_contig_blocks_check_cookies;\n \trte_mempool_op_calc_mem_size_default;\n \trte_mempool_op_populate_default;\n \n",
    "prefixes": [
        "dpdk-dev",
        "v4",
        "3/5"
    ]
}
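The record above is plain JSON and can be consumed directly by tooling. A short sketch follows; it parses a subset of the fields shown in this response (the `summarize` helper is illustrative, not part of the Patchwork API):

```python
import json

# Subset of the GET /api/patches/39018/ response shown above.
response_body = json.loads("""
{
    "id": 39018,
    "name": "[dpdk-dev,v4,3/5] mempool: support block dequeue operation",
    "state": "accepted",
    "archived": true,
    "check": "fail",
    "submitter": {"name": "Andrew Rybchenko", "email": "arybchenko@solarflare.com"},
    "delegate": {"username": "tmonjalo"}
}
""")


def summarize(patch):
    """One-line summary: patch id, state, overall check result, and delegate."""
    return (f"#{patch['id']} [{patch['state']}] check={patch['check']} "
            f"delegate={patch['delegate']['username']}")


print(summarize(response_body))
```

Note that `check` is the aggregate result of all CI checks (the per-check details live at the URL in the `checks` field), and `delegate` may be `null` for patches no maintainer has claimed, so production code should guard those lookups.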