get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
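
A minimal sketch of driving these endpoints from Python with the requests library. The token value is hypothetical; write access (PUT/PATCH) requires a Patchwork API token plus maintainer rights on the project, and the writable fields used below (such as state) should be checked against your server's schema:

import requests

URL = "http://patches.dpdk.org/api/patches/38963/"
TOKEN = "0123456789abcdef"  # hypothetical API token

# Reading a public patch needs no authentication.
patch = requests.get(URL).json()
print(patch["name"], "->", patch["state"])

# PATCH submits only the fields to change; PUT replaces the whole
# writable representation. Both need token authentication.
resp = requests.patch(
    URL,
    headers={"Authorization": f"Token {TOKEN}"},
    json={"state": "accepted"},
)
resp.raise_for_status()

A captured GET exchange for this patch follows.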

GET /api/patches/38963/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 38963,
    "url": "http://patches.dpdk.org/api/patches/38963/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1524673942-22726-2-git-send-email-arybchenko@solarflare.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1524673942-22726-2-git-send-email-arybchenko@solarflare.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1524673942-22726-2-git-send-email-arybchenko@solarflare.com",
    "date": "2018-04-25T16:32:17",
    "name": "[dpdk-dev,v3,1/6] mempool/bucket: implement bucket mempool manager",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "8c5828a0680970b4d93e6bf28bc4b56d628d93ca",
    "submitter": {
        "id": 607,
        "url": "http://patches.dpdk.org/api/people/607/?format=api",
        "name": "Andrew Rybchenko",
        "email": "arybchenko@solarflare.com"
    },
    "delegate": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/users/1/?format=api",
        "username": "tmonjalo",
        "first_name": "Thomas",
        "last_name": "Monjalon",
        "email": "thomas@monjalon.net"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1524673942-22726-2-git-send-email-arybchenko@solarflare.com/mbox/",
    "series": [],
    "comments": "http://patches.dpdk.org/api/patches/38963/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/38963/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 8DC19AACD;\n\tWed, 25 Apr 2018 18:32:51 +0200 (CEST)",
            "from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com\n\t[67.231.154.164]) by dpdk.org (Postfix) with ESMTP id 6C188AA9E\n\tfor <dev@dpdk.org>; Wed, 25 Apr 2018 18:32:39 +0200 (CEST)",
            "from webmail.solarflare.com (webmail.solarflare.com\n\t[12.187.104.26])\n\t(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1-us3.ppe-hosted.com (Proofpoint Essentials ESMTP Server) with\n\tESMTPS id 1965F6C006D; Wed, 25 Apr 2018 16:32:38 +0000 (UTC)",
            "from sfocexch01r.SolarFlarecom.com (10.20.40.34) by\n\tocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server\n\t(TLS) id 15.0.1044.25; Wed, 25 Apr 2018 09:32:35 -0700",
            "from ocex03.SolarFlarecom.com (10.20.40.36) by\n\tsfocexch01r.SolarFlarecom.com (10.20.40.34) with Microsoft SMTP\n\tServer (TLS) id 15.0.1044.25; Wed, 25 Apr 2018 09:32:30 -0700",
            "from opal.uk.solarflarecom.com (10.17.10.1) by\n\tocex03.SolarFlarecom.com (10.20.40.36) with Microsoft SMTP Server\n\t(TLS) id\n\t15.0.1044.25 via Frontend Transport; Wed, 25 Apr 2018 09:32:29 -0700",
            "from uklogin.uk.solarflarecom.com (uklogin.uk.solarflarecom.com\n\t[10.17.10.10])\n\tby opal.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id\n\tw3PGWSvs003745; Wed, 25 Apr 2018 17:32:28 +0100",
            "from uklogin.uk.solarflarecom.com (localhost.localdomain\n\t[127.0.0.1])\n\tby uklogin.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id\n\tw3PGWSgo022770; Wed, 25 Apr 2018 17:32:28 +0100"
        ],
        "X-Virus-Scanned": "Proofpoint Essentials engine",
        "From": "Andrew Rybchenko <arybchenko@solarflare.com>",
        "To": "<dev@dpdk.org>",
        "CC": "Olivier MATZ <olivier.matz@6wind.com>, \"Artem V. Andreev\"\n\t<Artem.Andreev@oktetlabs.ru>",
        "Date": "Wed, 25 Apr 2018 17:32:17 +0100",
        "Message-ID": "<1524673942-22726-2-git-send-email-arybchenko@solarflare.com>",
        "X-Mailer": "git-send-email 1.8.2.3",
        "In-Reply-To": "<1524673942-22726-1-git-send-email-arybchenko@solarflare.com>",
        "References": "<1511539591-20966-1-git-send-email-arybchenko@solarflare.com>\n\t<1524673942-22726-1-git-send-email-arybchenko@solarflare.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-MDID": "1524673958-uXEcBvWfynC4",
        "Subject": "[dpdk-dev] [PATCH v3 1/6] mempool/bucket: implement bucket mempool\n\tmanager",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: \"Artem V. Andreev\" <Artem.Andreev@oktetlabs.ru>\n\nThe manager provides a way to allocate physically and virtually\ncontiguous set of objects.\n\nSigned-off-by: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>\nSigned-off-by: Andrew Rybchenko <arybchenko@solarflare.com>\n---\n MAINTAINERS                                        |   9 +\n config/common_base                                 |   2 +\n drivers/mempool/Makefile                           |   1 +\n drivers/mempool/bucket/Makefile                    |  27 +\n drivers/mempool/bucket/meson.build                 |   9 +\n drivers/mempool/bucket/rte_mempool_bucket.c        | 563 +++++++++++++++++++++\n .../mempool/bucket/rte_mempool_bucket_version.map  |   4 +\n mk/rte.app.mk                                      |   1 +\n 8 files changed, 616 insertions(+)\n create mode 100644 drivers/mempool/bucket/Makefile\n create mode 100644 drivers/mempool/bucket/meson.build\n create mode 100644 drivers/mempool/bucket/rte_mempool_bucket.c\n create mode 100644 drivers/mempool/bucket/rte_mempool_bucket_version.map",
    "diff": "diff --git a/MAINTAINERS b/MAINTAINERS\nindex ec0b4845f..97dd70782 100644\n--- a/MAINTAINERS\n+++ b/MAINTAINERS\n@@ -364,6 +364,15 @@ F: test/test/test_rawdev.c\n F: doc/guides/prog_guide/rawdev.rst\n \n \n+Memory Pool Drivers\n+-------------------\n+\n+Bucket memory pool\n+M: Artem V. Andreev <Artem.Andreev@oktetlabs.ru>\n+M: Andrew Rybchenko <arybchenko@solarflare.com>\n+F: drivers/mempool/bucket/\n+\n+\n Bus Drivers\n -----------\n \ndiff --git a/config/common_base b/config/common_base\nindex 2787eb66e..03a8688b5 100644\n--- a/config/common_base\n+++ b/config/common_base\n@@ -633,6 +633,8 @@ CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n\n #\n # Compile Mempool drivers\n #\n+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET=y\n+CONFIG_RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB=64\n CONFIG_RTE_DRIVER_MEMPOOL_RING=y\n CONFIG_RTE_DRIVER_MEMPOOL_STACK=y\n \ndiff --git a/drivers/mempool/Makefile b/drivers/mempool/Makefile\nindex fc8b73b38..28c2e8360 100644\n--- a/drivers/mempool/Makefile\n+++ b/drivers/mempool/Makefile\n@@ -3,6 +3,7 @@\n \n include $(RTE_SDK)/mk/rte.vars.mk\n \n+DIRS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += bucket\n ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)\n DIRS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL) += dpaa\n endif\ndiff --git a/drivers/mempool/bucket/Makefile b/drivers/mempool/bucket/Makefile\nnew file mode 100644\nindex 000000000..7364916bc\n--- /dev/null\n+++ b/drivers/mempool/bucket/Makefile\n@@ -0,0 +1,27 @@\n+# SPDX-License-Identifier: BSD-3-Clause\n+#\n+# Copyright (c) 2017-2018 Solarflare Communications Inc.\n+# All rights reserved.\n+#\n+# This software was jointly developed between OKTET Labs (under contract\n+# for Solarflare) and Solarflare Communications, Inc.\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+#\n+# library name\n+#\n+LIB = librte_mempool_bucket.a\n+\n+CFLAGS += -O3\n+CFLAGS += $(WERROR_FLAGS)\n+\n+LDLIBS += -lrte_eal -lrte_mempool -lrte_ring\n+\n+EXPORT_MAP := rte_mempool_bucket_version.map\n+\n+LIBABIVER := 1\n+\n+SRCS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += rte_mempool_bucket.c\n+\n+include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/drivers/mempool/bucket/meson.build b/drivers/mempool/bucket/meson.build\nnew file mode 100644\nindex 000000000..618d79128\n--- /dev/null\n+++ b/drivers/mempool/bucket/meson.build\n@@ -0,0 +1,9 @@\n+# SPDX-License-Identifier: BSD-3-Clause\n+#\n+# Copyright (c) 2017-2018 Solarflare Communications Inc.\n+# All rights reserved.\n+#\n+# This software was jointly developed between OKTET Labs (under contract\n+# for Solarflare) and Solarflare Communications, Inc.\n+\n+sources = files('rte_mempool_bucket.c')\ndiff --git a/drivers/mempool/bucket/rte_mempool_bucket.c b/drivers/mempool/bucket/rte_mempool_bucket.c\nnew file mode 100644\nindex 000000000..ef822eb2a\n--- /dev/null\n+++ b/drivers/mempool/bucket/rte_mempool_bucket.c\n@@ -0,0 +1,563 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ *\n+ * Copyright (c) 2017-2018 Solarflare Communications Inc.\n+ * All rights reserved.\n+ *\n+ * This software was jointly developed between OKTET Labs (under contract\n+ * for Solarflare) and Solarflare Communications, Inc.\n+ */\n+\n+#include <stdbool.h>\n+#include <stdio.h>\n+#include <string.h>\n+\n+#include <rte_errno.h>\n+#include <rte_ring.h>\n+#include <rte_mempool.h>\n+#include <rte_malloc.h>\n+\n+/*\n+ * The general idea of the bucket mempool driver is as follows.\n+ * We keep track of physically contiguous groups (buckets) of objects\n+ * of a certain size. 
Every such a group has a counter that is\n+ * incremented every time an object from that group is enqueued.\n+ * Until the bucket is full, no objects from it are eligible for allocation.\n+ * If a request is made to dequeue a multiply of bucket size, it is\n+ * satisfied by returning the whole buckets, instead of separate objects.\n+ */\n+\n+\n+struct bucket_header {\n+\tunsigned int lcore_id;\n+\tuint8_t fill_cnt;\n+};\n+\n+struct bucket_stack {\n+\tunsigned int top;\n+\tunsigned int limit;\n+\tvoid *objects[];\n+};\n+\n+struct bucket_data {\n+\tunsigned int header_size;\n+\tunsigned int total_elt_size;\n+\tunsigned int obj_per_bucket;\n+\tuintptr_t bucket_page_mask;\n+\tstruct rte_ring *shared_bucket_ring;\n+\tstruct bucket_stack *buckets[RTE_MAX_LCORE];\n+\t/*\n+\t * Multi-producer single-consumer ring to hold objects that are\n+\t * returned to the mempool at a different lcore than initially\n+\t * dequeued\n+\t */\n+\tstruct rte_ring *adoption_buffer_rings[RTE_MAX_LCORE];\n+\tstruct rte_ring *shared_orphan_ring;\n+\tstruct rte_mempool *pool;\n+\tunsigned int bucket_mem_size;\n+};\n+\n+static struct bucket_stack *\n+bucket_stack_create(const struct rte_mempool *mp, unsigned int n_elts)\n+{\n+\tstruct bucket_stack *stack;\n+\n+\tstack = rte_zmalloc_socket(\"bucket_stack\",\n+\t\t\t\t   sizeof(struct bucket_stack) +\n+\t\t\t\t   n_elts * sizeof(void *),\n+\t\t\t\t   RTE_CACHE_LINE_SIZE,\n+\t\t\t\t   mp->socket_id);\n+\tif (stack == NULL)\n+\t\treturn NULL;\n+\tstack->limit = n_elts;\n+\tstack->top = 0;\n+\n+\treturn stack;\n+}\n+\n+static void\n+bucket_stack_push(struct bucket_stack *stack, void *obj)\n+{\n+\tRTE_ASSERT(stack->top < stack->limit);\n+\tstack->objects[stack->top++] = obj;\n+}\n+\n+static void *\n+bucket_stack_pop_unsafe(struct bucket_stack *stack)\n+{\n+\tRTE_ASSERT(stack->top > 0);\n+\treturn stack->objects[--stack->top];\n+}\n+\n+static void *\n+bucket_stack_pop(struct bucket_stack *stack)\n+{\n+\tif (stack->top == 0)\n+\t\treturn NULL;\n+\treturn bucket_stack_pop_unsafe(stack);\n+}\n+\n+static int\n+bucket_enqueue_single(struct bucket_data *bd, void *obj)\n+{\n+\tint rc = 0;\n+\tuintptr_t addr = (uintptr_t)obj;\n+\tstruct bucket_header *hdr;\n+\tunsigned int lcore_id = rte_lcore_id();\n+\n+\taddr &= bd->bucket_page_mask;\n+\thdr = (struct bucket_header *)addr;\n+\n+\tif (likely(hdr->lcore_id == lcore_id)) {\n+\t\tif (hdr->fill_cnt < bd->obj_per_bucket - 1) {\n+\t\t\thdr->fill_cnt++;\n+\t\t} else {\n+\t\t\thdr->fill_cnt = 0;\n+\t\t\t/* Stack is big enough to put all buckets */\n+\t\t\tbucket_stack_push(bd->buckets[lcore_id], hdr);\n+\t\t}\n+\t} else if (hdr->lcore_id != LCORE_ID_ANY) {\n+\t\tstruct rte_ring *adopt_ring =\n+\t\t\tbd->adoption_buffer_rings[hdr->lcore_id];\n+\n+\t\trc = rte_ring_enqueue(adopt_ring, obj);\n+\t\t/* Ring is big enough to put all objects */\n+\t\tRTE_ASSERT(rc == 0);\n+\t} else if (hdr->fill_cnt < bd->obj_per_bucket - 1) {\n+\t\thdr->fill_cnt++;\n+\t} else {\n+\t\thdr->fill_cnt = 0;\n+\t\trc = rte_ring_enqueue(bd->shared_bucket_ring, hdr);\n+\t\t/* Ring is big enough to put all buckets */\n+\t\tRTE_ASSERT(rc == 0);\n+\t}\n+\n+\treturn rc;\n+}\n+\n+static int\n+bucket_enqueue(struct rte_mempool *mp, void * const *obj_table,\n+\t       unsigned int n)\n+{\n+\tstruct bucket_data *bd = mp->pool_data;\n+\tunsigned int i;\n+\tint rc = 0;\n+\n+\tfor (i = 0; i < n; i++) {\n+\t\trc = bucket_enqueue_single(bd, obj_table[i]);\n+\t\tRTE_ASSERT(rc == 0);\n+\t}\n+\treturn rc;\n+}\n+\n+static void **\n+bucket_fill_obj_table(const struct bucket_data *bd, void 
**pstart,\n+\t\t      void **obj_table, unsigned int n)\n+{\n+\tunsigned int i;\n+\tuint8_t *objptr = *pstart;\n+\n+\tfor (objptr += bd->header_size, i = 0; i < n;\n+\t     i++, objptr += bd->total_elt_size)\n+\t\t*obj_table++ = objptr;\n+\t*pstart = objptr;\n+\treturn obj_table;\n+}\n+\n+static int\n+bucket_dequeue_orphans(struct bucket_data *bd, void **obj_table,\n+\t\t       unsigned int n_orphans)\n+{\n+\tunsigned int i;\n+\tint rc;\n+\tuint8_t *objptr;\n+\n+\trc = rte_ring_dequeue_bulk(bd->shared_orphan_ring, obj_table,\n+\t\t\t\t   n_orphans, NULL);\n+\tif (unlikely(rc != (int)n_orphans)) {\n+\t\tstruct bucket_header *hdr;\n+\n+\t\tobjptr = bucket_stack_pop(bd->buckets[rte_lcore_id()]);\n+\t\thdr = (struct bucket_header *)objptr;\n+\n+\t\tif (objptr == NULL) {\n+\t\t\trc = rte_ring_dequeue(bd->shared_bucket_ring,\n+\t\t\t\t\t      (void **)&objptr);\n+\t\t\tif (rc != 0) {\n+\t\t\t\trte_errno = ENOBUFS;\n+\t\t\t\treturn -rte_errno;\n+\t\t\t}\n+\t\t\thdr = (struct bucket_header *)objptr;\n+\t\t\thdr->lcore_id = rte_lcore_id();\n+\t\t}\n+\t\thdr->fill_cnt = 0;\n+\t\tbucket_fill_obj_table(bd, (void **)&objptr, obj_table,\n+\t\t\t\t      n_orphans);\n+\t\tfor (i = n_orphans; i < bd->obj_per_bucket; i++,\n+\t\t\t     objptr += bd->total_elt_size) {\n+\t\t\trc = rte_ring_enqueue(bd->shared_orphan_ring,\n+\t\t\t\t\t      objptr);\n+\t\t\tif (rc != 0) {\n+\t\t\t\tRTE_ASSERT(0);\n+\t\t\t\trte_errno = -rc;\n+\t\t\t\treturn rc;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+bucket_dequeue_buckets(struct bucket_data *bd, void **obj_table,\n+\t\t       unsigned int n_buckets)\n+{\n+\tstruct bucket_stack *cur_stack = bd->buckets[rte_lcore_id()];\n+\tunsigned int n_buckets_from_stack = RTE_MIN(n_buckets, cur_stack->top);\n+\tvoid **obj_table_base = obj_table;\n+\n+\tn_buckets -= n_buckets_from_stack;\n+\twhile (n_buckets_from_stack-- > 0) {\n+\t\tvoid *obj = bucket_stack_pop_unsafe(cur_stack);\n+\n+\t\tobj_table = bucket_fill_obj_table(bd, &obj, obj_table,\n+\t\t\t\t\t\t  bd->obj_per_bucket);\n+\t}\n+\twhile (n_buckets-- > 0) {\n+\t\tstruct bucket_header *hdr;\n+\n+\t\tif (unlikely(rte_ring_dequeue(bd->shared_bucket_ring,\n+\t\t\t\t\t      (void **)&hdr) != 0)) {\n+\t\t\t/*\n+\t\t\t * Return the already-dequeued buffers\n+\t\t\t * back to the mempool\n+\t\t\t */\n+\t\t\tbucket_enqueue(bd->pool, obj_table_base,\n+\t\t\t\t       obj_table - obj_table_base);\n+\t\t\trte_errno = ENOBUFS;\n+\t\t\treturn -rte_errno;\n+\t\t}\n+\t\thdr->lcore_id = rte_lcore_id();\n+\t\tobj_table = bucket_fill_obj_table(bd, (void **)&hdr,\n+\t\t\t\t\t\t  obj_table,\n+\t\t\t\t\t\t  bd->obj_per_bucket);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+bucket_adopt_orphans(struct bucket_data *bd)\n+{\n+\tint rc = 0;\n+\tstruct rte_ring *adopt_ring =\n+\t\tbd->adoption_buffer_rings[rte_lcore_id()];\n+\n+\tif (unlikely(!rte_ring_empty(adopt_ring))) {\n+\t\tvoid *orphan;\n+\n+\t\twhile (rte_ring_sc_dequeue(adopt_ring, &orphan) == 0) {\n+\t\t\trc = bucket_enqueue_single(bd, orphan);\n+\t\t\tRTE_ASSERT(rc == 0);\n+\t\t}\n+\t}\n+\treturn rc;\n+}\n+\n+static int\n+bucket_dequeue(struct rte_mempool *mp, void **obj_table, unsigned int n)\n+{\n+\tstruct bucket_data *bd = mp->pool_data;\n+\tunsigned int n_buckets = n / bd->obj_per_bucket;\n+\tunsigned int n_orphans = n - n_buckets * bd->obj_per_bucket;\n+\tint rc = 0;\n+\n+\tbucket_adopt_orphans(bd);\n+\n+\tif (unlikely(n_orphans > 0)) {\n+\t\trc = bucket_dequeue_orphans(bd, obj_table +\n+\t\t\t\t\t    (n_buckets * bd->obj_per_bucket),\n+\t\t\t\t\t    n_orphans);\n+\t\tif (rc 
!= 0)\n+\t\t\treturn rc;\n+\t}\n+\n+\tif (likely(n_buckets > 0)) {\n+\t\trc = bucket_dequeue_buckets(bd, obj_table, n_buckets);\n+\t\tif (unlikely(rc != 0) && n_orphans > 0) {\n+\t\t\trte_ring_enqueue_bulk(bd->shared_orphan_ring,\n+\t\t\t\t\t      obj_table + (n_buckets *\n+\t\t\t\t\t\t\t   bd->obj_per_bucket),\n+\t\t\t\t\t      n_orphans, NULL);\n+\t\t}\n+\t}\n+\n+\treturn rc;\n+}\n+\n+static void\n+count_underfilled_buckets(struct rte_mempool *mp,\n+\t\t\t  void *opaque,\n+\t\t\t  struct rte_mempool_memhdr *memhdr,\n+\t\t\t  __rte_unused unsigned int mem_idx)\n+{\n+\tunsigned int *pcount = opaque;\n+\tconst struct bucket_data *bd = mp->pool_data;\n+\tunsigned int bucket_page_sz =\n+\t\t(unsigned int)(~bd->bucket_page_mask + 1);\n+\tuintptr_t align;\n+\tuint8_t *iter;\n+\n+\talign = (uintptr_t)RTE_PTR_ALIGN_CEIL(memhdr->addr, bucket_page_sz) -\n+\t\t(uintptr_t)memhdr->addr;\n+\n+\tfor (iter = (uint8_t *)memhdr->addr + align;\n+\t     iter < (uint8_t *)memhdr->addr + memhdr->len;\n+\t     iter += bucket_page_sz) {\n+\t\tstruct bucket_header *hdr = (struct bucket_header *)iter;\n+\n+\t\t*pcount += hdr->fill_cnt;\n+\t}\n+}\n+\n+static unsigned int\n+bucket_get_count(const struct rte_mempool *mp)\n+{\n+\tconst struct bucket_data *bd = mp->pool_data;\n+\tunsigned int count =\n+\t\tbd->obj_per_bucket * rte_ring_count(bd->shared_bucket_ring) +\n+\t\trte_ring_count(bd->shared_orphan_ring);\n+\tunsigned int i;\n+\n+\tfor (i = 0; i < RTE_MAX_LCORE; i++) {\n+\t\tif (!rte_lcore_is_enabled(i))\n+\t\t\tcontinue;\n+\t\tcount += bd->obj_per_bucket * bd->buckets[i]->top +\n+\t\t\trte_ring_count(bd->adoption_buffer_rings[i]);\n+\t}\n+\n+\trte_mempool_mem_iter((struct rte_mempool *)(uintptr_t)mp,\n+\t\t\t     count_underfilled_buckets, &count);\n+\n+\treturn count;\n+}\n+\n+static int\n+bucket_alloc(struct rte_mempool *mp)\n+{\n+\tint rg_flags = 0;\n+\tint rc = 0;\n+\tchar rg_name[RTE_RING_NAMESIZE];\n+\tstruct bucket_data *bd;\n+\tunsigned int i;\n+\tunsigned int bucket_header_size;\n+\n+\tbd = rte_zmalloc_socket(\"bucket_pool\", sizeof(*bd),\n+\t\t\t\tRTE_CACHE_LINE_SIZE, mp->socket_id);\n+\tif (bd == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto no_mem_for_data;\n+\t}\n+\tbd->pool = mp;\n+\tif (mp->flags & MEMPOOL_F_NO_CACHE_ALIGN)\n+\t\tbucket_header_size = sizeof(struct bucket_header);\n+\telse\n+\t\tbucket_header_size = RTE_CACHE_LINE_SIZE;\n+\tRTE_BUILD_BUG_ON(sizeof(struct bucket_header) > RTE_CACHE_LINE_SIZE);\n+\tbd->header_size = mp->header_size + bucket_header_size;\n+\tbd->total_elt_size = mp->header_size + mp->elt_size + mp->trailer_size;\n+\tbd->bucket_mem_size = RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB * 1024;\n+\tbd->obj_per_bucket = (bd->bucket_mem_size - bucket_header_size) /\n+\t\tbd->total_elt_size;\n+\tbd->bucket_page_mask = ~(rte_align64pow2(bd->bucket_mem_size) - 1);\n+\n+\tif (mp->flags & MEMPOOL_F_SP_PUT)\n+\t\trg_flags |= RING_F_SP_ENQ;\n+\tif (mp->flags & MEMPOOL_F_SC_GET)\n+\t\trg_flags |= RING_F_SC_DEQ;\n+\n+\tfor (i = 0; i < RTE_MAX_LCORE; i++) {\n+\t\tif (!rte_lcore_is_enabled(i))\n+\t\t\tcontinue;\n+\t\tbd->buckets[i] =\n+\t\t\tbucket_stack_create(mp, mp->size / bd->obj_per_bucket);\n+\t\tif (bd->buckets[i] == NULL) {\n+\t\t\trc = -ENOMEM;\n+\t\t\tgoto no_mem_for_stacks;\n+\t\t}\n+\t\trc = snprintf(rg_name, sizeof(rg_name),\n+\t\t\t      RTE_MEMPOOL_MZ_FORMAT \".a%u\", mp->name, i);\n+\t\tif (rc < 0 || rc >= (int)sizeof(rg_name)) {\n+\t\t\trc = -ENAMETOOLONG;\n+\t\t\tgoto no_mem_for_stacks;\n+\t\t}\n+\t\tbd->adoption_buffer_rings[i] =\n+\t\t\trte_ring_create(rg_name, 
rte_align32pow2(mp->size + 1),\n+\t\t\t\t\tmp->socket_id,\n+\t\t\t\t\trg_flags | RING_F_SC_DEQ);\n+\t\tif (bd->adoption_buffer_rings[i] == NULL) {\n+\t\t\trc = -rte_errno;\n+\t\t\tgoto no_mem_for_stacks;\n+\t\t}\n+\t}\n+\n+\trc = snprintf(rg_name, sizeof(rg_name),\n+\t\t      RTE_MEMPOOL_MZ_FORMAT \".0\", mp->name);\n+\tif (rc < 0 || rc >= (int)sizeof(rg_name)) {\n+\t\trc = -ENAMETOOLONG;\n+\t\tgoto invalid_shared_orphan_ring;\n+\t}\n+\tbd->shared_orphan_ring =\n+\t\trte_ring_create(rg_name, rte_align32pow2(mp->size + 1),\n+\t\t\t\tmp->socket_id, rg_flags);\n+\tif (bd->shared_orphan_ring == NULL) {\n+\t\trc = -rte_errno;\n+\t\tgoto cannot_create_shared_orphan_ring;\n+\t}\n+\n+\trc = snprintf(rg_name, sizeof(rg_name),\n+\t\t       RTE_MEMPOOL_MZ_FORMAT \".1\", mp->name);\n+\tif (rc < 0 || rc >= (int)sizeof(rg_name)) {\n+\t\trc = -ENAMETOOLONG;\n+\t\tgoto invalid_shared_bucket_ring;\n+\t}\n+\tbd->shared_bucket_ring =\n+\t\trte_ring_create(rg_name,\n+\t\t\t\trte_align32pow2((mp->size + 1) /\n+\t\t\t\t\t\tbd->obj_per_bucket),\n+\t\t\t\tmp->socket_id, rg_flags);\n+\tif (bd->shared_bucket_ring == NULL) {\n+\t\trc = -rte_errno;\n+\t\tgoto cannot_create_shared_bucket_ring;\n+\t}\n+\n+\tmp->pool_data = bd;\n+\n+\treturn 0;\n+\n+cannot_create_shared_bucket_ring:\n+invalid_shared_bucket_ring:\n+\trte_ring_free(bd->shared_orphan_ring);\n+cannot_create_shared_orphan_ring:\n+invalid_shared_orphan_ring:\n+no_mem_for_stacks:\n+\tfor (i = 0; i < RTE_MAX_LCORE; i++) {\n+\t\trte_free(bd->buckets[i]);\n+\t\trte_ring_free(bd->adoption_buffer_rings[i]);\n+\t}\n+\trte_free(bd);\n+no_mem_for_data:\n+\trte_errno = -rc;\n+\treturn rc;\n+}\n+\n+static void\n+bucket_free(struct rte_mempool *mp)\n+{\n+\tunsigned int i;\n+\tstruct bucket_data *bd = mp->pool_data;\n+\n+\tif (bd == NULL)\n+\t\treturn;\n+\n+\tfor (i = 0; i < RTE_MAX_LCORE; i++) {\n+\t\trte_free(bd->buckets[i]);\n+\t\trte_ring_free(bd->adoption_buffer_rings[i]);\n+\t}\n+\n+\trte_ring_free(bd->shared_orphan_ring);\n+\trte_ring_free(bd->shared_bucket_ring);\n+\n+\trte_free(bd);\n+}\n+\n+static ssize_t\n+bucket_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,\n+\t\t     __rte_unused uint32_t pg_shift, size_t *min_total_elt_size,\n+\t\t     size_t *align)\n+{\n+\tstruct bucket_data *bd = mp->pool_data;\n+\tunsigned int bucket_page_sz;\n+\n+\tif (bd == NULL)\n+\t\treturn -EINVAL;\n+\n+\tbucket_page_sz = rte_align32pow2(bd->bucket_mem_size);\n+\t*align = bucket_page_sz;\n+\t*min_total_elt_size = bucket_page_sz;\n+\t/*\n+\t * Each bucket occupies its own block aligned to\n+\t * bucket_page_sz, so the required amount of memory is\n+\t * a multiple of bucket_page_sz.\n+\t * We also need extra space for a bucket header\n+\t */\n+\treturn ((obj_num + bd->obj_per_bucket - 1) /\n+\t\tbd->obj_per_bucket) * bucket_page_sz;\n+}\n+\n+static int\n+bucket_populate(struct rte_mempool *mp, unsigned int max_objs,\n+\t\tvoid *vaddr, rte_iova_t iova, size_t len,\n+\t\trte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)\n+{\n+\tstruct bucket_data *bd = mp->pool_data;\n+\tunsigned int bucket_page_sz;\n+\tunsigned int bucket_header_sz;\n+\tunsigned int n_objs;\n+\tuintptr_t align;\n+\tuint8_t *iter;\n+\tint rc;\n+\n+\tif (bd == NULL)\n+\t\treturn -EINVAL;\n+\n+\tbucket_page_sz = rte_align32pow2(bd->bucket_mem_size);\n+\talign = RTE_PTR_ALIGN_CEIL((uintptr_t)vaddr, bucket_page_sz) -\n+\t\t(uintptr_t)vaddr;\n+\n+\tbucket_header_sz = bd->header_size - mp->header_size;\n+\tif (iova != RTE_BAD_IOVA)\n+\t\tiova += align + bucket_header_sz;\n+\n+\tfor (iter = (uint8_t 
*)vaddr + align, n_objs = 0;\n+\t     iter < (uint8_t *)vaddr + len && n_objs < max_objs;\n+\t     iter += bucket_page_sz) {\n+\t\tstruct bucket_header *hdr = (struct bucket_header *)iter;\n+\t\tunsigned int chunk_len = bd->bucket_mem_size;\n+\n+\t\tif ((size_t)(iter - (uint8_t *)vaddr) + chunk_len > len)\n+\t\t\tchunk_len = len - (iter - (uint8_t *)vaddr);\n+\t\tif (chunk_len <= bucket_header_sz)\n+\t\t\tbreak;\n+\t\tchunk_len -= bucket_header_sz;\n+\n+\t\thdr->fill_cnt = 0;\n+\t\thdr->lcore_id = LCORE_ID_ANY;\n+\t\trc = rte_mempool_op_populate_default(mp,\n+\t\t\t\t\t\t     RTE_MIN(bd->obj_per_bucket,\n+\t\t\t\t\t\t\t     max_objs - n_objs),\n+\t\t\t\t\t\t     iter + bucket_header_sz,\n+\t\t\t\t\t\t     iova, chunk_len,\n+\t\t\t\t\t\t     obj_cb, obj_cb_arg);\n+\t\tif (rc < 0)\n+\t\t\treturn rc;\n+\t\tn_objs += rc;\n+\t\tif (iova != RTE_BAD_IOVA)\n+\t\t\tiova += bucket_page_sz;\n+\t}\n+\n+\treturn n_objs;\n+}\n+\n+static const struct rte_mempool_ops ops_bucket = {\n+\t.name = \"bucket\",\n+\t.alloc = bucket_alloc,\n+\t.free = bucket_free,\n+\t.enqueue = bucket_enqueue,\n+\t.dequeue = bucket_dequeue,\n+\t.get_count = bucket_get_count,\n+\t.calc_mem_size = bucket_calc_mem_size,\n+\t.populate = bucket_populate,\n+};\n+\n+\n+MEMPOOL_REGISTER_OPS(ops_bucket);\ndiff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map\nnew file mode 100644\nindex 000000000..9b9ab1a4c\n--- /dev/null\n+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map\n@@ -0,0 +1,4 @@\n+DPDK_18.05 {\n+\n+\tlocal: *;\n+};\ndiff --git a/mk/rte.app.mk b/mk/rte.app.mk\nindex 1584800ce..29a2a6095 100644\n--- a/mk/rte.app.mk\n+++ b/mk/rte.app.mk\n@@ -125,6 +125,7 @@ endif\n ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),n)\n # plugins (link only if static libraries)\n \n+_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_BUCKET) += -lrte_mempool_bucket\n _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_STACK)  += -lrte_mempool_stack\n ifeq ($(CONFIG_RTE_LIBRTE_DPAA_BUS),y)\n _LDLIBS-$(CONFIG_RTE_LIBRTE_DPAA_MEMPOOL)   += -lrte_mempool_dpaa\n",
    "prefixes": [
        "dpdk-dev",
        "v3",
        "1/6"
    ]
}
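
The mbox URL in the response serves the raw patch email, which git am can apply directly. A short sketch of that workflow, again using Python requests (the checkout path is hypothetical, and since this patch's state is "superseded" you would normally apply the newest revision instead):

import subprocess

import requests

# Fetch the patch metadata, then its raw mbox via the URL it advertises.
meta = requests.get("http://patches.dpdk.org/api/patches/38963/").json()
mbox = requests.get(meta["mbox"])
mbox.raise_for_status()

# git am with no file argument reads the mbox from stdin.
subprocess.run(
    ["git", "-C", "/home/user/src/dpdk", "am"],
    input=mbox.content,
    check=True,
)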