get:
Show a patch.

patch:
Partially update a patch; only the fields supplied in the request are changed.

put:
Update a patch, replacing its full set of writable fields.
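
The exchange below shows what the GET endpoint returns. Read access needs no authentication, so the resource can be fetched with any HTTP client; a minimal sketch in Python using the requests library (client code assumed here, not part of Patchwork itself) follows, and an update sketch appears after the full response at the end of this page.

import requests

# Fetch one patch as JSON from the Patchwork REST API (read-only, no token needed).
resp = requests.get("https://patches.dpdk.org/api/patches/2240/")
resp.raise_for_status()
patch = resp.json()

print(patch["name"])   # "[dpdk-dev,RFC,07/13] core: move librte_ring to core subdir"
print(patch["state"])  # "rfc"
print(patch["mbox"])   # mbox download URL, suitable for piping into git am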

GET /api/patches/2240/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2240,
    "url": "https://patches.dpdk.org/api/patches/2240/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1421080446-19249-8-git-send-email-sergio.gonzalez.monroy@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1421080446-19249-8-git-send-email-sergio.gonzalez.monroy@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1421080446-19249-8-git-send-email-sergio.gonzalez.monroy@intel.com",
    "date": "2015-01-12T16:34:00",
    "name": "[dpdk-dev,RFC,07/13] core: move librte_ring to core subdir",
    "commit_ref": null,
    "pull_url": null,
    "state": "rfc",
    "archived": true,
    "hash": "4419b1ad8be7b70ae723c1d8e5744ce608a0c4bd",
    "submitter": {
        "id": 73,
        "url": "https://patches.dpdk.org/api/people/73/?format=api",
        "name": "Sergio Gonzalez Monroy",
        "email": "sergio.gonzalez.monroy@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1421080446-19249-8-git-send-email-sergio.gonzalez.monroy@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/2240/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/2240/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 491CF5AE6;\n\tMon, 12 Jan 2015 17:34:41 +0100 (CET)",
            "from mga14.intel.com (mga14.intel.com [192.55.52.115])\n\tby dpdk.org (Postfix) with ESMTP id 08FE15AC8\n\tfor <dev@dpdk.org>; Mon, 12 Jan 2015 17:34:11 +0100 (CET)",
            "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby fmsmga103.fm.intel.com with ESMTP; 12 Jan 2015 08:29:03 -0800",
            "from irvmail001.ir.intel.com ([163.33.26.43])\n\tby orsmga001.jf.intel.com with ESMTP; 12 Jan 2015 08:34:09 -0800",
            "from sivswdev02.ir.intel.com (sivswdev02.ir.intel.com\n\t[10.237.217.46])\n\tby irvmail001.ir.intel.com (8.14.3/8.13.6/MailSET/Hub) with ESMTP id\n\tt0CGY7s0022025 for <dev@dpdk.org>; Mon, 12 Jan 2015 16:34:07 GMT",
            "from sivswdev02.ir.intel.com (localhost [127.0.0.1])\n\tby sivswdev02.ir.intel.com with ESMTP id t0CGY79x019337\n\tfor <dev@dpdk.org>; Mon, 12 Jan 2015 16:34:07 GMT",
            "(from smonroy@localhost)\n\tby sivswdev02.ir.intel.com with  id t0CGY7pw019333\n\tfor dev@dpdk.org; Mon, 12 Jan 2015 16:34:07 GMT"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.07,744,1413270000\"; d=\"scan'208\";a=\"636101475\"",
        "From": "Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Mon, 12 Jan 2015 16:34:00 +0000",
        "Message-Id": "<1421080446-19249-8-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "X-Mailer": "git-send-email 1.8.5.4",
        "In-Reply-To": "<1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "References": "<1421080446-19249-1-git-send-email-sergio.gonzalez.monroy@intel.com>",
        "Subject": "[dpdk-dev] [PATCH RFC 07/13] core: move librte_ring to core subdir",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "This is equivalent to:\n\ngit mv lib/librte_ring lib/core\n\nSigned-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>\n---\n lib/core/librte_ring/Makefile   |   48 ++\n lib/core/librte_ring/rte_ring.c |  338 +++++++++++\n lib/core/librte_ring/rte_ring.h | 1214 +++++++++++++++++++++++++++++++++++++++\n lib/librte_ring/Makefile        |   48 --\n lib/librte_ring/rte_ring.c      |  338 -----------\n lib/librte_ring/rte_ring.h      | 1214 ---------------------------------------\n 6 files changed, 1600 insertions(+), 1600 deletions(-)\n create mode 100644 lib/core/librte_ring/Makefile\n create mode 100644 lib/core/librte_ring/rte_ring.c\n create mode 100644 lib/core/librte_ring/rte_ring.h\n delete mode 100644 lib/librte_ring/Makefile\n delete mode 100644 lib/librte_ring/rte_ring.c\n delete mode 100644 lib/librte_ring/rte_ring.h",
    "diff": "diff --git a/lib/core/librte_ring/Makefile b/lib/core/librte_ring/Makefile\nnew file mode 100644\nindex 0000000..2380a43\n--- /dev/null\n+++ b/lib/core/librte_ring/Makefile\n@@ -0,0 +1,48 @@\n+#   BSD LICENSE\n+#\n+#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+#   All rights reserved.\n+#\n+#   Redistribution and use in source and binary forms, with or without\n+#   modification, are permitted provided that the following conditions\n+#   are met:\n+#\n+#     * Redistributions of source code must retain the above copyright\n+#       notice, this list of conditions and the following disclaimer.\n+#     * Redistributions in binary form must reproduce the above copyright\n+#       notice, this list of conditions and the following disclaimer in\n+#       the documentation and/or other materials provided with the\n+#       distribution.\n+#     * Neither the name of Intel Corporation nor the names of its\n+#       contributors may be used to endorse or promote products derived\n+#       from this software without specific prior written permission.\n+#\n+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+# library name\n+LIB = librte_ring.a\n+\n+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n+\n+# all source are stored in SRCS-y\n+SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c\n+\n+# install includes\n+SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h\n+\n+# this lib needs eal and rte_malloc\n+DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal lib/librte_malloc\n+\n+include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/lib/core/librte_ring/rte_ring.c b/lib/core/librte_ring/rte_ring.c\nnew file mode 100644\nindex 0000000..f5899c4\n--- /dev/null\n+++ b/lib/core/librte_ring/rte_ring.c\n@@ -0,0 +1,338 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+/*\n+ * Derived from FreeBSD's bufring.c\n+ *\n+ **************************************************************************\n+ *\n+ * Copyright (c) 2007,2008 Kip Macy kmacy@freebsd.org\n+ * All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright notice,\n+ *    this list of conditions and the following disclaimer.\n+ *\n+ * 2. The name of Kip Macy nor the names of other\n+ *    contributors may be used to endorse or promote products derived from\n+ *    this software without specific prior written permission.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ *\n+ ***************************************************************************/\n+\n+#include <stdio.h>\n+#include <stdarg.h>\n+#include <string.h>\n+#include <stdint.h>\n+#include <inttypes.h>\n+#include <errno.h>\n+#include <sys/queue.h>\n+\n+#include <rte_common.h>\n+#include <rte_log.h>\n+#include <rte_memory.h>\n+#include <rte_memzone.h>\n+#include <rte_malloc.h>\n+#include <rte_launch.h>\n+#include <rte_tailq.h>\n+#include <rte_eal.h>\n+#include <rte_eal_memconfig.h>\n+#include <rte_atomic.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_errno.h>\n+#include <rte_string_fns.h>\n+#include <rte_spinlock.h>\n+\n+#include \"rte_ring.h\"\n+\n+TAILQ_HEAD(rte_ring_list, rte_tailq_entry);\n+\n+/* true if x is a power of 2 */\n+#define POWEROF2(x) ((((x)-1) & (x)) == 0)\n+\n+/* return the size of memory occupied by a ring */\n+ssize_t\n+rte_ring_get_memsize(unsigned count)\n+{\n+\tssize_t sz;\n+\n+\t/* count must be a power of 2 */\n+\tif ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {\n+\t\tRTE_LOG(ERR, RING,\n+\t\t\t\"Requested size is invalid, must be power of 2, and \"\n+\t\t\t\"do not exceed the size limit %u\\n\", RTE_RING_SZ_MASK);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tsz = sizeof(struct rte_ring) + count * sizeof(void *);\n+\tsz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);\n+\treturn sz;\n+}\n+\n+int\n+rte_ring_init(struct rte_ring *r, const char *name, unsigned count,\n+\tunsigned flags)\n+{\n+\t/* compilation-time checks */\n+\tRTE_BUILD_BUG_ON((sizeof(struct rte_ring) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#ifdef RTE_RING_SPLIT_PROD_CONS\n+\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#endif\n+\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+\tRTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &\n+\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n+#endif\n+\n+\t/* init the ring structure */\n+\tmemset(r, 0, sizeof(*r));\n+\tsnprintf(r->name, sizeof(r->name), \"%s\", name);\n+\tr->flags = flags;\n+\tr->prod.watermark = count;\n+\tr->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);\n+\tr->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);\n+\tr->prod.size = r->cons.size = count;\n+\tr->prod.mask = r->cons.mask = count-1;\n+\tr->prod.head = r->cons.head = 0;\n+\tr->prod.tail = r->cons.tail = 0;\n+\n+\treturn 0;\n+}\n+\n+/* create the ring */\n+struct rte_ring *\n+rte_ring_create(const char *name, unsigned count, int socket_id,\n+\t\tunsigned flags)\n+{\n+\tchar mz_name[RTE_MEMZONE_NAMESIZE];\n+\tstruct rte_ring *r;\n+\tstruct rte_tailq_entry *te;\n+\tconst struct rte_memzone *mz;\n+\tssize_t ring_size;\n+\tint mz_flags = 0;\n+\tstruct rte_ring_list* ring_list = NULL;\n+\n+\t/* check that we have an initialised tail queue */\n+\tif ((ring_list =\n+\t     
RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn NULL;\n+\t}\n+\n+\tring_size = rte_ring_get_memsize(count);\n+\tif (ring_size < 0) {\n+\t\trte_errno = ring_size;\n+\t\treturn NULL;\n+\t}\n+\n+\tte = rte_zmalloc(\"RING_TAILQ_ENTRY\", sizeof(*te), 0);\n+\tif (te == NULL) {\n+\t\tRTE_LOG(ERR, RING, \"Cannot reserve memory for tailq\\n\");\n+\t\trte_errno = ENOMEM;\n+\t\treturn NULL;\n+\t}\n+\n+\tsnprintf(mz_name, sizeof(mz_name), \"%s%s\", RTE_RING_MZ_PREFIX, name);\n+\n+\trte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);\n+\n+\t/* reserve a memory zone for this ring. If we can't get rte_config or\n+\t * we are secondary process, the memzone_reserve function will set\n+\t * rte_errno for us appropriately - hence no check in this this function */\n+\tmz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);\n+\tif (mz != NULL) {\n+\t\tr = mz->addr;\n+\t\t/* no need to check return value here, we already checked the\n+\t\t * arguments above */\n+\t\trte_ring_init(r, name, count, flags);\n+\n+\t\tte->data = (void *) r;\n+\n+\t\tTAILQ_INSERT_TAIL(ring_list, te, next);\n+\t} else {\n+\t\tr = NULL;\n+\t\tRTE_LOG(ERR, RING, \"Cannot reserve memory\\n\");\n+\t\trte_free(te);\n+\t}\n+\trte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);\n+\n+\treturn r;\n+}\n+\n+/*\n+ * change the high water mark. If *count* is 0, water marking is\n+ * disabled\n+ */\n+int\n+rte_ring_set_water_mark(struct rte_ring *r, unsigned count)\n+{\n+\tif (count >= r->prod.size)\n+\t\treturn -EINVAL;\n+\n+\t/* if count is 0, disable the watermarking */\n+\tif (count == 0)\n+\t\tcount = r->prod.size;\n+\n+\tr->prod.watermark = count;\n+\treturn 0;\n+}\n+\n+/* dump the status of the ring on the console */\n+void\n+rte_ring_dump(FILE *f, const struct rte_ring *r)\n+{\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+\tstruct rte_ring_debug_stats sum;\n+\tunsigned lcore_id;\n+#endif\n+\n+\tfprintf(f, \"ring <%s>@%p\\n\", r->name, r);\n+\tfprintf(f, \"  flags=%x\\n\", r->flags);\n+\tfprintf(f, \"  size=%\"PRIu32\"\\n\", r->prod.size);\n+\tfprintf(f, \"  ct=%\"PRIu32\"\\n\", r->cons.tail);\n+\tfprintf(f, \"  ch=%\"PRIu32\"\\n\", r->cons.head);\n+\tfprintf(f, \"  pt=%\"PRIu32\"\\n\", r->prod.tail);\n+\tfprintf(f, \"  ph=%\"PRIu32\"\\n\", r->prod.head);\n+\tfprintf(f, \"  used=%u\\n\", rte_ring_count(r));\n+\tfprintf(f, \"  avail=%u\\n\", rte_ring_free_count(r));\n+\tif (r->prod.watermark == r->prod.size)\n+\t\tfprintf(f, \"  watermark=0\\n\");\n+\telse\n+\t\tfprintf(f, \"  watermark=%\"PRIu32\"\\n\", r->prod.watermark);\n+\n+\t/* sum and dump statistics */\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+\tmemset(&sum, 0, sizeof(sum));\n+\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n+\t\tsum.enq_success_bulk += r->stats[lcore_id].enq_success_bulk;\n+\t\tsum.enq_success_objs += r->stats[lcore_id].enq_success_objs;\n+\t\tsum.enq_quota_bulk += r->stats[lcore_id].enq_quota_bulk;\n+\t\tsum.enq_quota_objs += r->stats[lcore_id].enq_quota_objs;\n+\t\tsum.enq_fail_bulk += r->stats[lcore_id].enq_fail_bulk;\n+\t\tsum.enq_fail_objs += r->stats[lcore_id].enq_fail_objs;\n+\t\tsum.deq_success_bulk += r->stats[lcore_id].deq_success_bulk;\n+\t\tsum.deq_success_objs += r->stats[lcore_id].deq_success_objs;\n+\t\tsum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;\n+\t\tsum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;\n+\t}\n+\tfprintf(f, \"  size=%\"PRIu32\"\\n\", r->prod.size);\n+\tfprintf(f, \"  enq_success_bulk=%\"PRIu64\"\\n\", sum.enq_success_bulk);\n+\tfprintf(f, \"  
enq_success_objs=%\"PRIu64\"\\n\", sum.enq_success_objs);\n+\tfprintf(f, \"  enq_quota_bulk=%\"PRIu64\"\\n\", sum.enq_quota_bulk);\n+\tfprintf(f, \"  enq_quota_objs=%\"PRIu64\"\\n\", sum.enq_quota_objs);\n+\tfprintf(f, \"  enq_fail_bulk=%\"PRIu64\"\\n\", sum.enq_fail_bulk);\n+\tfprintf(f, \"  enq_fail_objs=%\"PRIu64\"\\n\", sum.enq_fail_objs);\n+\tfprintf(f, \"  deq_success_bulk=%\"PRIu64\"\\n\", sum.deq_success_bulk);\n+\tfprintf(f, \"  deq_success_objs=%\"PRIu64\"\\n\", sum.deq_success_objs);\n+\tfprintf(f, \"  deq_fail_bulk=%\"PRIu64\"\\n\", sum.deq_fail_bulk);\n+\tfprintf(f, \"  deq_fail_objs=%\"PRIu64\"\\n\", sum.deq_fail_objs);\n+#else\n+\tfprintf(f, \"  no statistics available\\n\");\n+#endif\n+}\n+\n+/* dump the status of all rings on the console */\n+void\n+rte_ring_list_dump(FILE *f)\n+{\n+\tconst struct rte_tailq_entry *te;\n+\tstruct rte_ring_list *ring_list;\n+\n+\t/* check that we have an initialised tail queue */\n+\tif ((ring_list =\n+\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn;\n+\t}\n+\n+\trte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);\n+\n+\tTAILQ_FOREACH(te, ring_list, next) {\n+\t\trte_ring_dump(f, (struct rte_ring *) te->data);\n+\t}\n+\n+\trte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);\n+}\n+\n+/* search a ring from its name */\n+struct rte_ring *\n+rte_ring_lookup(const char *name)\n+{\n+\tstruct rte_tailq_entry *te;\n+\tstruct rte_ring *r = NULL;\n+\tstruct rte_ring_list *ring_list;\n+\n+\t/* check that we have an initialized tail queue */\n+\tif ((ring_list =\n+\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n+\t\trte_errno = E_RTE_NO_TAILQ;\n+\t\treturn NULL;\n+\t}\n+\n+\trte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);\n+\n+\tTAILQ_FOREACH(te, ring_list, next) {\n+\t\tr = (struct rte_ring *) te->data;\n+\t\tif (strncmp(name, r->name, RTE_RING_NAMESIZE) == 0)\n+\t\t\tbreak;\n+\t}\n+\n+\trte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);\n+\n+\tif (te == NULL) {\n+\t\trte_errno = ENOENT;\n+\t\treturn NULL;\n+\t}\n+\n+\treturn r;\n+}\ndiff --git a/lib/core/librte_ring/rte_ring.h b/lib/core/librte_ring/rte_ring.h\nnew file mode 100644\nindex 0000000..7cd5f2d\n--- /dev/null\n+++ b/lib/core/librte_ring/rte_ring.h\n@@ -0,0 +1,1214 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+/*\n+ * Derived from FreeBSD's bufring.h\n+ *\n+ **************************************************************************\n+ *\n+ * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org\n+ * All rights reserved.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ *\n+ * 1. Redistributions of source code must retain the above copyright notice,\n+ *    this list of conditions and the following disclaimer.\n+ *\n+ * 2. The name of Kip Macy nor the names of other\n+ *    contributors may be used to endorse or promote products derived from\n+ *    this software without specific prior written permission.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ *\n+ ***************************************************************************/\n+\n+#ifndef _RTE_RING_H_\n+#define _RTE_RING_H_\n+\n+/**\n+ * @file\n+ * RTE Ring\n+ *\n+ * The Ring Manager is a fixed-size queue, implemented as a table of\n+ * pointers. Head and tail pointers are modified atomically, allowing\n+ * concurrent access to it. It has the following features:\n+ *\n+ * - FIFO (First In First Out)\n+ * - Maximum size is fixed; the pointers are stored in a table.\n+ * - Lockless implementation.\n+ * - Multi- or single-consumer dequeue.\n+ * - Multi- or single-producer enqueue.\n+ * - Bulk dequeue.\n+ * - Bulk enqueue.\n+ *\n+ * Note: the ring implementation is not preemptable. 
A lcore must not\n+ * be interrupted by another task that uses the same ring.\n+ *\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include <stdio.h>\n+#include <stdint.h>\n+#include <sys/queue.h>\n+#include <errno.h>\n+#include <rte_common.h>\n+#include <rte_memory.h>\n+#include <rte_lcore.h>\n+#include <rte_atomic.h>\n+#include <rte_branch_prediction.h>\n+\n+enum rte_ring_queue_behavior {\n+\tRTE_RING_QUEUE_FIXED = 0, /* Enq/Deq a fixed number of items from a ring */\n+\tRTE_RING_QUEUE_VARIABLE   /* Enq/Deq as many items a possible from ring */\n+};\n+\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+/**\n+ * A structure that stores the ring statistics (per-lcore).\n+ */\n+struct rte_ring_debug_stats {\n+\tuint64_t enq_success_bulk; /**< Successful enqueues number. */\n+\tuint64_t enq_success_objs; /**< Objects successfully enqueued. */\n+\tuint64_t enq_quota_bulk;   /**< Successful enqueues above watermark. */\n+\tuint64_t enq_quota_objs;   /**< Objects enqueued above watermark. */\n+\tuint64_t enq_fail_bulk;    /**< Failed enqueues number. */\n+\tuint64_t enq_fail_objs;    /**< Objects that failed to be enqueued. */\n+\tuint64_t deq_success_bulk; /**< Successful dequeues number. */\n+\tuint64_t deq_success_objs; /**< Objects successfully dequeued. */\n+\tuint64_t deq_fail_bulk;    /**< Failed dequeues number. */\n+\tuint64_t deq_fail_objs;    /**< Objects that failed to be dequeued. */\n+} __rte_cache_aligned;\n+#endif\n+\n+#define RTE_RING_NAMESIZE 32 /**< The maximum length of a ring name. */\n+#define RTE_RING_MZ_PREFIX \"RG_\"\n+\n+/**\n+ * An RTE ring structure.\n+ *\n+ * The producer and the consumer have a head and a tail index. The particularity\n+ * of these index is that they are not between 0 and size(ring). These indexes\n+ * are between 0 and 2^32, and we mask their value when we access the ring[]\n+ * field. Thanks to this assumption, we can do subtractions between 2 index\n+ * values in a modulo-32bit base: that's why the overflow of the indexes is not\n+ * a problem.\n+ */\n+struct rte_ring {\n+\tchar name[RTE_RING_NAMESIZE];    /**< Name of the ring. */\n+\tint flags;                       /**< Flags supplied at creation. */\n+\n+\t/** Ring producer status. */\n+\tstruct prod {\n+\t\tuint32_t watermark;      /**< Maximum items before EDQUOT. */\n+\t\tuint32_t sp_enqueue;     /**< True, if single producer. */\n+\t\tuint32_t size;           /**< Size of ring. */\n+\t\tuint32_t mask;           /**< Mask (size-1) of ring. */\n+\t\tvolatile uint32_t head;  /**< Producer head. */\n+\t\tvolatile uint32_t tail;  /**< Producer tail. */\n+\t} prod __rte_cache_aligned;\n+\n+\t/** Ring consumer status. */\n+\tstruct cons {\n+\t\tuint32_t sc_dequeue;     /**< True, if single consumer. */\n+\t\tuint32_t size;           /**< Size of the ring. */\n+\t\tuint32_t mask;           /**< Mask (size-1) of ring. */\n+\t\tvolatile uint32_t head;  /**< Consumer head. */\n+\t\tvolatile uint32_t tail;  /**< Consumer tail. */\n+#ifdef RTE_RING_SPLIT_PROD_CONS\n+\t} cons __rte_cache_aligned;\n+#else\n+\t} cons;\n+#endif\n+\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+\tstruct rte_ring_debug_stats stats[RTE_MAX_LCORE];\n+#endif\n+\n+\tvoid * ring[0] __rte_cache_aligned; /**< Memory space of ring starts here.\n+\t                                     * not volatile so need to be careful\n+\t                                     * about compiler re-ordering */\n+};\n+\n+#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is \"single-producer\". 
*/\n+#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is \"single-consumer\". */\n+#define RTE_RING_QUOT_EXCEED (1 << 31)  /**< Quota exceed for burst ops */\n+#define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */\n+\n+/**\n+ * @internal When debug is enabled, store ring statistics.\n+ * @param r\n+ *   A pointer to the ring.\n+ * @param name\n+ *   The name of the statistics field to increment in the ring.\n+ * @param n\n+ *   The number to add to the object-oriented statistics.\n+ */\n+#ifdef RTE_LIBRTE_RING_DEBUG\n+#define __RING_STAT_ADD(r, name, n) do {\t\t\\\n+\t\tunsigned __lcore_id = rte_lcore_id();\t\\\n+\t\tr->stats[__lcore_id].name##_objs += n;\t\\\n+\t\tr->stats[__lcore_id].name##_bulk += 1;\t\\\n+\t} while(0)\n+#else\n+#define __RING_STAT_ADD(r, name, n) do {} while(0)\n+#endif\n+\n+/**\n+ * Calculate the memory size needed for a ring\n+ *\n+ * This function returns the number of bytes needed for a ring, given\n+ * the number of elements in it. This value is the sum of the size of\n+ * the structure rte_ring and the size of the memory needed by the\n+ * objects pointers. The value is aligned to a cache line size.\n+ *\n+ * @param count\n+ *   The number of elements in the ring (must be a power of 2).\n+ * @return\n+ *   - The memory size needed for the ring on success.\n+ *   - -EINVAL if count is not a power of 2.\n+ */\n+ssize_t rte_ring_get_memsize(unsigned count);\n+\n+/**\n+ * Initialize a ring structure.\n+ *\n+ * Initialize a ring structure in memory pointed by \"r\". The size of the\n+ * memory area must be large enough to store the ring structure and the\n+ * object table. It is advised to use rte_ring_get_memsize() to get the\n+ * appropriate size.\n+ *\n+ * The ring size is set to *count*, which must be a power of two. Water\n+ * marking is disabled by default. The real usable ring size is\n+ * *count-1* instead of *count* to differentiate a free ring from an\n+ * empty ring.\n+ *\n+ * The ring is not added in RTE_TAILQ_RING global list. Indeed, the\n+ * memory given by the caller may not be shareable among dpdk\n+ * processes.\n+ *\n+ * @param r\n+ *   The pointer to the ring structure followed by the objects table.\n+ * @param name\n+ *   The name of the ring.\n+ * @param count\n+ *   The number of elements in the ring (must be a power of 2).\n+ * @param flags\n+ *   An OR of the following:\n+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when\n+ *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``\n+ *      is \"single-producer\". Otherwise, it is \"multi-producers\".\n+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when\n+ *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``\n+ *      is \"single-consumer\". Otherwise, it is \"multi-consumers\".\n+ * @return\n+ *   0 on success, or a negative value on error.\n+ */\n+int rte_ring_init(struct rte_ring *r, const char *name, unsigned count,\n+\tunsigned flags);\n+\n+/**\n+ * Create a new ring named *name* in memory.\n+ *\n+ * This function uses ``memzone_reserve()`` to allocate memory. Then it\n+ * calls rte_ring_init() to initialize an empty ring.\n+ *\n+ * The new ring size is set to *count*, which must be a power of\n+ * two. Water marking is disabled by default. 
The real usable ring size\n+ * is *count-1* instead of *count* to differentiate a free ring from an\n+ * empty ring.\n+ *\n+ * The ring is added in RTE_TAILQ_RING list.\n+ *\n+ * @param name\n+ *   The name of the ring.\n+ * @param count\n+ *   The size of the ring (must be a power of 2).\n+ * @param socket_id\n+ *   The *socket_id* argument is the socket identifier in case of\n+ *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n+ *   constraint for the reserved zone.\n+ * @param flags\n+ *   An OR of the following:\n+ *    - RING_F_SP_ENQ: If this flag is set, the default behavior when\n+ *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``\n+ *      is \"single-producer\". Otherwise, it is \"multi-producers\".\n+ *    - RING_F_SC_DEQ: If this flag is set, the default behavior when\n+ *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``\n+ *      is \"single-consumer\". Otherwise, it is \"multi-consumers\".\n+ * @return\n+ *   On success, the pointer to the new allocated ring. NULL on error with\n+ *    rte_errno set appropriately. Possible errno values include:\n+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n+ *    - E_RTE_SECONDARY - function was called from a secondary process instance\n+ *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring list\n+ *    - EINVAL - count provided is not a power of 2\n+ *    - ENOSPC - the maximum number of memzones has already been allocated\n+ *    - EEXIST - a memzone with the same name already exists\n+ *    - ENOMEM - no appropriate memory area found in which to create memzone\n+ */\n+struct rte_ring *rte_ring_create(const char *name, unsigned count,\n+\t\t\t\t int socket_id, unsigned flags);\n+\n+/**\n+ * Change the high water mark.\n+ *\n+ * If *count* is 0, water marking is disabled. Otherwise, it is set to the\n+ * *count* value. 
The *count* value must be greater than 0 and less\n+ * than the ring size.\n+ *\n+ * This function can be called at any time (not necessarily at\n+ * initialization).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param count\n+ *   The new water mark value.\n+ * @return\n+ *   - 0: Success; water mark changed.\n+ *   - -EINVAL: Invalid water mark value.\n+ */\n+int rte_ring_set_water_mark(struct rte_ring *r, unsigned count);\n+\n+/**\n+ * Dump the status of the ring to the console.\n+ *\n+ * @param f\n+ *   A pointer to a file for output\n+ * @param r\n+ *   A pointer to the ring structure.\n+ */\n+void rte_ring_dump(FILE *f, const struct rte_ring *r);\n+\n+/* the actual enqueue of pointers on the ring.\n+ * Placed here since identical code needed in both\n+ * single and multi producer enqueue functions */\n+#define ENQUEUE_PTRS() do { \\\n+\tconst uint32_t size = r->prod.size; \\\n+\tuint32_t idx = prod_head & mask; \\\n+\tif (likely(idx + n < size)) { \\\n+\t\tfor (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \\\n+\t\t\tr->ring[idx] = obj_table[i]; \\\n+\t\t\tr->ring[idx+1] = obj_table[i+1]; \\\n+\t\t\tr->ring[idx+2] = obj_table[i+2]; \\\n+\t\t\tr->ring[idx+3] = obj_table[i+3]; \\\n+\t\t} \\\n+\t\tswitch (n & 0x3) { \\\n+\t\t\tcase 3: r->ring[idx++] = obj_table[i++]; \\\n+\t\t\tcase 2: r->ring[idx++] = obj_table[i++]; \\\n+\t\t\tcase 1: r->ring[idx++] = obj_table[i++]; \\\n+\t\t} \\\n+\t} else { \\\n+\t\tfor (i = 0; idx < size; i++, idx++)\\\n+\t\t\tr->ring[idx] = obj_table[i]; \\\n+\t\tfor (idx = 0; i < n; i++, idx++) \\\n+\t\t\tr->ring[idx] = obj_table[i]; \\\n+\t} \\\n+} while(0)\n+\n+/* the actual copy of pointers on the ring to obj_table.\n+ * Placed here since identical code needed in both\n+ * single and multi consumer dequeue functions */\n+#define DEQUEUE_PTRS() do { \\\n+\tuint32_t idx = cons_head & mask; \\\n+\tconst uint32_t size = r->cons.size; \\\n+\tif (likely(idx + n < size)) { \\\n+\t\tfor (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\\\n+\t\t\tobj_table[i] = r->ring[idx]; \\\n+\t\t\tobj_table[i+1] = r->ring[idx+1]; \\\n+\t\t\tobj_table[i+2] = r->ring[idx+2]; \\\n+\t\t\tobj_table[i+3] = r->ring[idx+3]; \\\n+\t\t} \\\n+\t\tswitch (n & 0x3) { \\\n+\t\t\tcase 3: obj_table[i++] = r->ring[idx++]; \\\n+\t\t\tcase 2: obj_table[i++] = r->ring[idx++]; \\\n+\t\t\tcase 1: obj_table[i++] = r->ring[idx++]; \\\n+\t\t} \\\n+\t} else { \\\n+\t\tfor (i = 0; idx < size; i++, idx++) \\\n+\t\t\tobj_table[i] = r->ring[idx]; \\\n+\t\tfor (idx = 0; i < n; i++, idx++) \\\n+\t\t\tobj_table[i] = r->ring[idx]; \\\n+\t} \\\n+} while (0)\n+\n+/**\n+ * @internal Enqueue several objects on the ring (multi-producers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * producer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @param behavior\n+ *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring\n+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring\n+ * @return\n+ *   Depend on the behavior value\n+ *   if behavior = RTE_RING_QUEUE_FIXED\n+ *   - 0: Success; objects enqueue.\n+ *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n+ *   if behavior = RTE_RING_QUEUE_VARIABLE\n+ *   - n: Actual number of objects enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+__rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n+{\n+\tuint32_t prod_head, prod_next;\n+\tuint32_t cons_tail, free_entries;\n+\tconst unsigned max = n;\n+\tint success;\n+\tunsigned i;\n+\tuint32_t mask = r->prod.mask;\n+\tint ret;\n+\n+\t/* move prod.head atomically */\n+\tdo {\n+\t\t/* Reset n to the initial burst count */\n+\t\tn = max;\n+\n+\t\tprod_head = r->prod.head;\n+\t\tcons_tail = r->cons.tail;\n+\t\t/* The subtraction is done between two unsigned 32bits value\n+\t\t * (the result is always modulo 32 bits even if we have\n+\t\t * prod_head > cons_tail). So 'free_entries' is always between 0\n+\t\t * and size(ring)-1. */\n+\t\tfree_entries = (mask + cons_tail - prod_head);\n+\n+\t\t/* check that we have enough room in ring */\n+\t\tif (unlikely(n > free_entries)) {\n+\t\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n+\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n+\t\t\t\treturn -ENOBUFS;\n+\t\t\t}\n+\t\t\telse {\n+\t\t\t\t/* No free entry available */\n+\t\t\t\tif (unlikely(free_entries == 0)) {\n+\t\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n+\t\t\t\t\treturn 0;\n+\t\t\t\t}\n+\n+\t\t\t\tn = free_entries;\n+\t\t\t}\n+\t\t}\n+\n+\t\tprod_next = prod_head + n;\n+\t\tsuccess = rte_atomic32_cmpset(&r->prod.head, prod_head,\n+\t\t\t\t\t      prod_next);\n+\t} while (unlikely(success == 0));\n+\n+\t/* write entries in ring */\n+\tENQUEUE_PTRS();\n+\trte_compiler_barrier();\n+\n+\t/* if we exceed the watermark */\n+\tif (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {\n+\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :\n+\t\t\t\t(int)(n | RTE_RING_QUOT_EXCEED);\n+\t\t__RING_STAT_ADD(r, enq_quota, n);\n+\t}\n+\telse {\n+\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;\n+\t\t__RING_STAT_ADD(r, enq_success, n);\n+\t}\n+\n+\t/*\n+\t * If there are other enqueues in progress that preceded us,\n+\t * we need to wait for them to complete\n+\t */\n+\twhile (unlikely(r->prod.tail != prod_head))\n+\t\trte_pause();\n+\n+\tr->prod.tail = prod_next;\n+\treturn ret;\n+}\n+\n+/**\n+ * @internal Enqueue several objects on a ring (NOT multi-producers safe).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @param behavior\n+ *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring\n+ *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring\n+ * @return\n+ *   Depend on the behavior value\n+ *   if behavior = RTE_RING_QUEUE_FIXED\n+ *   - 0: Success; objects enqueue.\n+ *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n+ *   if behavior = RTE_RING_QUEUE_VARIABLE\n+ *   - n: Actual number of objects enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+__rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n+{\n+\tuint32_t prod_head, cons_tail;\n+\tuint32_t prod_next, free_entries;\n+\tunsigned i;\n+\tuint32_t mask = r->prod.mask;\n+\tint ret;\n+\n+\tprod_head = r->prod.head;\n+\tcons_tail = r->cons.tail;\n+\t/* The subtraction is done between two unsigned 32bits value\n+\t * (the result is always modulo 32 bits even if we have\n+\t * prod_head > cons_tail). So 'free_entries' is always between 0\n+\t * and size(ring)-1. */\n+\tfree_entries = mask + cons_tail - prod_head;\n+\n+\t/* check that we have enough room in ring */\n+\tif (unlikely(n > free_entries)) {\n+\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n+\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n+\t\t\treturn -ENOBUFS;\n+\t\t}\n+\t\telse {\n+\t\t\t/* No free entry available */\n+\t\t\tif (unlikely(free_entries == 0)) {\n+\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n+\t\t\t\treturn 0;\n+\t\t\t}\n+\n+\t\t\tn = free_entries;\n+\t\t}\n+\t}\n+\n+\tprod_next = prod_head + n;\n+\tr->prod.head = prod_next;\n+\n+\t/* write entries in ring */\n+\tENQUEUE_PTRS();\n+\trte_compiler_barrier();\n+\n+\t/* if we exceed the watermark */\n+\tif (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {\n+\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :\n+\t\t\t(int)(n | RTE_RING_QUOT_EXCEED);\n+\t\t__RING_STAT_ADD(r, enq_quota, n);\n+\t}\n+\telse {\n+\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;\n+\t\t__RING_STAT_ADD(r, enq_success, n);\n+\t}\n+\n+\tr->prod.tail = prod_next;\n+\treturn ret;\n+}\n+\n+/**\n+ * @internal Dequeue several objects from a ring (multi-consumers safe). 
When\n+ * the request objects are more than the available objects, only dequeue the\n+ * actual number of objects\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * consumer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @param behavior\n+ *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring\n+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring\n+ * @return\n+ *   Depend on the behavior value\n+ *   if behavior = RTE_RING_QUEUE_FIXED\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n+ *     dequeued.\n+ *   if behavior = RTE_RING_QUEUE_VARIABLE\n+ *   - n: Actual number of objects dequeued.\n+ */\n+\n+static inline int __attribute__((always_inline))\n+__rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,\n+\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n+{\n+\tuint32_t cons_head, prod_tail;\n+\tuint32_t cons_next, entries;\n+\tconst unsigned max = n;\n+\tint success;\n+\tunsigned i;\n+\tuint32_t mask = r->prod.mask;\n+\n+\t/* move cons.head atomically */\n+\tdo {\n+\t\t/* Restore n as it may change every loop */\n+\t\tn = max;\n+\n+\t\tcons_head = r->cons.head;\n+\t\tprod_tail = r->prod.tail;\n+\t\t/* The subtraction is done between two unsigned 32bits value\n+\t\t * (the result is always modulo 32 bits even if we have\n+\t\t * cons_head > prod_tail). So 'entries' is always between 0\n+\t\t * and size(ring)-1. */\n+\t\tentries = (prod_tail - cons_head);\n+\n+\t\t/* Set the actual entries for dequeue */\n+\t\tif (n > entries) {\n+\t\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n+\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n+\t\t\t\treturn -ENOENT;\n+\t\t\t}\n+\t\t\telse {\n+\t\t\t\tif (unlikely(entries == 0)){\n+\t\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n+\t\t\t\t\treturn 0;\n+\t\t\t\t}\n+\n+\t\t\t\tn = entries;\n+\t\t\t}\n+\t\t}\n+\n+\t\tcons_next = cons_head + n;\n+\t\tsuccess = rte_atomic32_cmpset(&r->cons.head, cons_head,\n+\t\t\t\t\t      cons_next);\n+\t} while (unlikely(success == 0));\n+\n+\t/* copy in table */\n+\tDEQUEUE_PTRS();\n+\trte_compiler_barrier();\n+\n+\t/*\n+\t * If there are other dequeues in progress that preceded us,\n+\t * we need to wait for them to complete\n+\t */\n+\twhile (unlikely(r->cons.tail != cons_head))\n+\t\trte_pause();\n+\n+\t__RING_STAT_ADD(r, deq_success, n);\n+\tr->cons.tail = cons_next;\n+\n+\treturn behavior == RTE_RING_QUEUE_FIXED ? 
0 : n;\n+}\n+\n+/**\n+ * @internal Dequeue several objects from a ring (NOT multi-consumers safe).\n+ * When the request objects are more than the available objects, only dequeue\n+ * the actual number of objects\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @param behavior\n+ *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring\n+ *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring\n+ * @return\n+ *   Depend on the behavior value\n+ *   if behavior = RTE_RING_QUEUE_FIXED\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n+ *     dequeued.\n+ *   if behavior = RTE_RING_QUEUE_VARIABLE\n+ *   - n: Actual number of objects dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+__rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,\n+\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n+{\n+\tuint32_t cons_head, prod_tail;\n+\tuint32_t cons_next, entries;\n+\tunsigned i;\n+\tuint32_t mask = r->prod.mask;\n+\n+\tcons_head = r->cons.head;\n+\tprod_tail = r->prod.tail;\n+\t/* The subtraction is done between two unsigned 32bits value\n+\t * (the result is always modulo 32 bits even if we have\n+\t * cons_head > prod_tail). So 'entries' is always between 0\n+\t * and size(ring)-1. */\n+\tentries = prod_tail - cons_head;\n+\n+\tif (n > entries) {\n+\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n+\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n+\t\t\treturn -ENOENT;\n+\t\t}\n+\t\telse {\n+\t\t\tif (unlikely(entries == 0)){\n+\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n+\t\t\t\treturn 0;\n+\t\t\t}\n+\n+\t\t\tn = entries;\n+\t\t}\n+\t}\n+\n+\tcons_next = cons_head + n;\n+\tr->cons.head = cons_next;\n+\n+\t/* copy in table */\n+\tDEQUEUE_PTRS();\n+\trte_compiler_barrier();\n+\n+\t__RING_STAT_ADD(r, deq_success, n);\n+\tr->cons.tail = cons_next;\n+\treturn behavior == RTE_RING_QUEUE_FIXED ? 0 : n;\n+}\n+\n+/**\n+ * Enqueue several objects on the ring (multi-producers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * producer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - 0: Success; objects enqueue.\n+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n)\n+{\n+\treturn __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n+}\n+\n+/**\n+ * Enqueue several objects on a ring (NOT multi-producers safe).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - 0: Success; objects enqueued.\n+ *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n)\n+{\n+\treturn __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n+}\n+\n+/**\n+ * Enqueue several objects on a ring.\n+ *\n+ * This function calls the multi-producer or the single-producer\n+ * version depending on the default behavior that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - 0: Success; objects enqueued.\n+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n+\t\t      unsigned n)\n+{\n+\tif (r->prod.sp_enqueue)\n+\t\treturn rte_ring_sp_enqueue_bulk(r, obj_table, n);\n+\telse\n+\t\treturn rte_ring_mp_enqueue_bulk(r, obj_table, n);\n+}\n+\n+/**\n+ * Enqueue one object on a ring (multi-producers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * producer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ * @return\n+ *   - 0: Success; objects enqueued.\n+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_mp_enqueue(struct rte_ring *r, void *obj)\n+{\n+\treturn rte_ring_mp_enqueue_bulk(r, &obj, 1);\n+}\n+\n+/**\n+ * Enqueue one object on a ring (NOT multi-producers safe).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ * @return\n+ *   - 0: Success; objects enqueued.\n+ *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_sp_enqueue(struct rte_ring *r, void *obj)\n+{\n+\treturn rte_ring_sp_enqueue_bulk(r, &obj, 1);\n+}\n+\n+/**\n+ * Enqueue one object on a ring.\n+ *\n+ * This function calls the multi-producer or the single-producer\n+ * version, depending on the default behaviour that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj\n+ *   A pointer to the object to be added.\n+ * @return\n+ *   - 0: Success; objects enqueued.\n+ *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n+ *     high water mark is exceeded.\n+ *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_enqueue(struct rte_ring *r, void *obj)\n+{\n+\tif (r->prod.sp_enqueue)\n+\t\treturn rte_ring_sp_enqueue(r, obj);\n+\telse\n+\t\treturn rte_ring_mp_enqueue(r, obj);\n+}\n+\n+/**\n+ * Dequeue several objects from a ring (multi-consumers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * consumer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @return\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\treturn __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n+}\n+\n+/**\n+ * Dequeue several objects from a ring (NOT multi-consumers safe).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table,\n+ *   must be strictly positive.\n+ * @return\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\treturn __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n+}\n+\n+/**\n+ * Dequeue several objects from a ring.\n+ *\n+ * This function calls the multi-consumers or the single-consumer\n+ * version, depending on the default behaviour that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @return\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\tif (r->cons.sc_dequeue)\n+\t\treturn rte_ring_sc_dequeue_bulk(r, obj_table, n);\n+\telse\n+\t\treturn rte_ring_mc_dequeue_bulk(r, obj_table, n);\n+}\n+\n+/**\n+ * Dequeue one object from a ring (multi-consumers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * consumer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)\n+{\n+\treturn rte_ring_mc_dequeue_bulk(r, obj_p, 1);\n+}\n+\n+/**\n+ * Dequeue one object from a ring (NOT multi-consumers safe).\n+ *\n+ * @param r\n+ *   A 
pointer to the ring structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success; objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)\n+{\n+\treturn rte_ring_sc_dequeue_bulk(r, obj_p, 1);\n+}\n+\n+/**\n+ * Dequeue one object from a ring.\n+ *\n+ * This function calls the multi-consumers or the single-consumer\n+ * version depending on the default behaviour that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_p\n+ *   A pointer to a void * pointer (object) that will be filled.\n+ * @return\n+ *   - 0: Success, objects dequeued.\n+ *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n+ *     dequeued.\n+ */\n+static inline int __attribute__((always_inline))\n+rte_ring_dequeue(struct rte_ring *r, void **obj_p)\n+{\n+\tif (r->cons.sc_dequeue)\n+\t\treturn rte_ring_sc_dequeue(r, obj_p);\n+\telse\n+\t\treturn rte_ring_mc_dequeue(r, obj_p);\n+}\n+\n+/**\n+ * Test if a ring is full.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @return\n+ *   - 1: The ring is full.\n+ *   - 0: The ring is not full.\n+ */\n+static inline int\n+rte_ring_full(const struct rte_ring *r)\n+{\n+\tuint32_t prod_tail = r->prod.tail;\n+\tuint32_t cons_tail = r->cons.tail;\n+\treturn (((cons_tail - prod_tail - 1) & r->prod.mask) == 0);\n+}\n+\n+/**\n+ * Test if a ring is empty.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @return\n+ *   - 1: The ring is empty.\n+ *   - 0: The ring is not empty.\n+ */\n+static inline int\n+rte_ring_empty(const struct rte_ring *r)\n+{\n+\tuint32_t prod_tail = r->prod.tail;\n+\tuint32_t cons_tail = r->cons.tail;\n+\treturn !!(cons_tail == prod_tail);\n+}\n+\n+/**\n+ * Return the number of entries in a ring.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @return\n+ *   The number of entries in the ring.\n+ */\n+static inline unsigned\n+rte_ring_count(const struct rte_ring *r)\n+{\n+\tuint32_t prod_tail = r->prod.tail;\n+\tuint32_t cons_tail = r->cons.tail;\n+\treturn ((prod_tail - cons_tail) & r->prod.mask);\n+}\n+\n+/**\n+ * Return the number of free entries in a ring.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @return\n+ *   The number of free entries in the ring.\n+ */\n+static inline unsigned\n+rte_ring_free_count(const struct rte_ring *r)\n+{\n+\tuint32_t prod_tail = r->prod.tail;\n+\tuint32_t cons_tail = r->cons.tail;\n+\treturn ((cons_tail - prod_tail - 1) & r->prod.mask);\n+}\n+\n+/**\n+ * Dump the status of all rings on the console\n+ *\n+ * @param f\n+ *   A pointer to a file for output\n+ */\n+void rte_ring_list_dump(FILE *f);\n+\n+/**\n+ * Search a ring from its name\n+ *\n+ * @param name\n+ *   The name of the ring.\n+ * @return\n+ *   The pointer to the ring matching the name, or NULL if not found,\n+ *   with rte_errno set appropriately. 
Possible rte_errno values include:\n+ *    - ENOENT - required entry not available to return.\n+ */\n+struct rte_ring *rte_ring_lookup(const char *name);\n+\n+/**\n+ * Enqueue several objects on the ring (multi-producers safe).\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * producer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - n: Actual number of objects enqueued.\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n)\n+{\n+\treturn __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n+}\n+\n+/**\n+ * Enqueue several objects on a ring (NOT multi-producers safe).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - n: Actual number of objects enqueued.\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n+\t\t\t unsigned n)\n+{\n+\treturn __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n+}\n+\n+/**\n+ * Enqueue several objects on a ring.\n+ *\n+ * This function calls the multi-producer or the single-producer\n+ * version depending on the default behavior that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects).\n+ * @param n\n+ *   The number of objects to add in the ring from the obj_table.\n+ * @return\n+ *   - n: Actual number of objects enqueued.\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n+\t\t      unsigned n)\n+{\n+\tif (r->prod.sp_enqueue)\n+\t\treturn rte_ring_sp_enqueue_burst(r, obj_table, n);\n+\telse\n+\t\treturn rte_ring_mp_enqueue_burst(r, obj_table, n);\n+}\n+\n+/**\n+ * Dequeue several objects from a ring (multi-consumers safe). 
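A hedged sketch of the lookup-plus-burst producer side (the ring name "app_ring" is illustrative): the burst enqueue may accept fewer than n objects, and the caller keeps whatever is left over. With watermarking disabled (the default) the return value is simply the count:

#include <rte_ring.h>
#include <rte_errno.h>

static unsigned
produce_burst(void **objs, unsigned n)
{
	/* attach to a ring created elsewhere, e.g. by the primary process */
	struct rte_ring *r = rte_ring_lookup("app_ring");
	unsigned sent;

	if (r == NULL)
		return 0;	/* rte_errno is ENOENT: no ring by that name */

	sent = rte_ring_enqueue_burst(r, objs, n);
	/* objs[sent..n-1] were not enqueued and remain the caller's */
	return sent;
}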
When the requested\n+ * number of objects exceeds the number available, only the available objects\n+ * are dequeued.\n+ *\n+ * This function uses a \"compare and set\" instruction to move the\n+ * consumer index atomically.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @return\n+ *   - n: Actual number of objects dequeued, 0 if ring is empty\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\treturn __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n+}\n+\n+/**\n+ * Dequeue several objects from a ring (NOT multi-consumers safe). When the\n+ * requested number of objects exceeds the number available, only the\n+ * available objects are dequeued.\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @return\n+ *   - n: Actual number of objects dequeued, 0 if ring is empty\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\treturn __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n+}\n+\n+/**\n+ * Dequeue multiple objects from a ring up to a maximum number.\n+ *\n+ * This function calls the multi-consumers or the single-consumer\n+ * version, depending on the default behaviour that was specified at\n+ * ring creation time (see flags).\n+ *\n+ * @param r\n+ *   A pointer to the ring structure.\n+ * @param obj_table\n+ *   A pointer to a table of void * pointers (objects) that will be filled.\n+ * @param n\n+ *   The number of objects to dequeue from the ring to the obj_table.\n+ * @return\n+ *   - Number of objects dequeued\n+ */\n+static inline unsigned __attribute__((always_inline))\n+rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n+{\n+\tif (r->cons.sc_dequeue)\n+\t\treturn rte_ring_sc_dequeue_burst(r, obj_table, n);\n+\telse\n+\t\treturn rte_ring_mc_dequeue_burst(r, obj_table, n);\n+}\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_RING_H_ */\ndiff --git a/lib/librte_ring/Makefile b/lib/librte_ring/Makefile\ndeleted file mode 100644\nindex 2380a43..0000000\n--- a/lib/librte_ring/Makefile\n+++ /dev/null\n@@ -1,48 +0,0 @@\n-#   BSD LICENSE\n-#\n-#   Copyright(c) 2010-2014 Intel Corporation. 
All rights reserved.\n-#   All rights reserved.\n-#\n-#   Redistribution and use in source and binary forms, with or without\n-#   modification, are permitted provided that the following conditions\n-#   are met:\n-#\n-#     * Redistributions of source code must retain the above copyright\n-#       notice, this list of conditions and the following disclaimer.\n-#     * Redistributions in binary form must reproduce the above copyright\n-#       notice, this list of conditions and the following disclaimer in\n-#       the documentation and/or other materials provided with the\n-#       distribution.\n-#     * Neither the name of Intel Corporation nor the names of its\n-#       contributors may be used to endorse or promote products derived\n-#       from this software without specific prior written permission.\n-#\n-#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n-#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n-#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n-#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n-#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n-#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n-#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n-#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n-#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n-#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n-#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n-\n-include $(RTE_SDK)/mk/rte.vars.mk\n-\n-# library name\n-LIB = librte_ring.a\n-\n-CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n-\n-# all source are stored in SRCS-y\n-SRCS-$(CONFIG_RTE_LIBRTE_RING) := rte_ring.c\n-\n-# install includes\n-SYMLINK-$(CONFIG_RTE_LIBRTE_RING)-include := rte_ring.h\n-\n-# this lib needs eal and rte_malloc\n-DEPDIRS-$(CONFIG_RTE_LIBRTE_RING) += lib/librte_eal lib/librte_malloc\n-\n-include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/lib/librte_ring/rte_ring.c b/lib/librte_ring/rte_ring.c\ndeleted file mode 100644\nindex f5899c4..0000000\n--- a/lib/librte_ring/rte_ring.c\n+++ /dev/null\n@@ -1,338 +0,0 @@\n-/*-\n- *   BSD LICENSE\n- *\n- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n- *   All rights reserved.\n- *\n- *   Redistribution and use in source and binary forms, with or without\n- *   modification, are permitted provided that the following conditions\n- *   are met:\n- *\n- *     * Redistributions of source code must retain the above copyright\n- *       notice, this list of conditions and the following disclaimer.\n- *     * Redistributions in binary form must reproduce the above copyright\n- *       notice, this list of conditions and the following disclaimer in\n- *       the documentation and/or other materials provided with the\n- *       distribution.\n- *     * Neither the name of Intel Corporation nor the names of its\n- *       contributors may be used to endorse or promote products derived\n- *       from this software without specific prior written permission.\n- *\n- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n- *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n- *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n- */\n-\n-/*\n- * Derived from FreeBSD's bufring.c\n- *\n- **************************************************************************\n- *\n- * Copyright (c) 2007,2008 Kip Macy kmacy@freebsd.org\n- * All rights reserved.\n- *\n- * Redistribution and use in source and binary forms, with or without\n- * modification, are permitted provided that the following conditions are met:\n- *\n- * 1. Redistributions of source code must retain the above copyright notice,\n- *    this list of conditions and the following disclaimer.\n- *\n- * 2. The name of Kip Macy nor the names of other\n- *    contributors may be used to endorse or promote products derived from\n- *    this software without specific prior written permission.\n- *\n- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n- * POSSIBILITY OF SUCH DAMAGE.\n- *\n- ***************************************************************************/\n-\n-#include <stdio.h>\n-#include <stdarg.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <inttypes.h>\n-#include <errno.h>\n-#include <sys/queue.h>\n-\n-#include <rte_common.h>\n-#include <rte_log.h>\n-#include <rte_memory.h>\n-#include <rte_memzone.h>\n-#include <rte_malloc.h>\n-#include <rte_launch.h>\n-#include <rte_tailq.h>\n-#include <rte_eal.h>\n-#include <rte_eal_memconfig.h>\n-#include <rte_atomic.h>\n-#include <rte_per_lcore.h>\n-#include <rte_lcore.h>\n-#include <rte_branch_prediction.h>\n-#include <rte_errno.h>\n-#include <rte_string_fns.h>\n-#include <rte_spinlock.h>\n-\n-#include \"rte_ring.h\"\n-\n-TAILQ_HEAD(rte_ring_list, rte_tailq_entry);\n-\n-/* true if x is a power of 2 */\n-#define POWEROF2(x) ((((x)-1) & (x)) == 0)\n-\n-/* return the size of memory occupied by a ring */\n-ssize_t\n-rte_ring_get_memsize(unsigned count)\n-{\n-\tssize_t sz;\n-\n-\t/* count must be a power of 2 */\n-\tif ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {\n-\t\tRTE_LOG(ERR, RING,\n-\t\t\t\"Requested size is invalid, must be power of 2, and \"\n-\t\t\t\"do not exceed the size limit %u\\n\", RTE_RING_SZ_MASK);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tsz = sizeof(struct rte_ring) + count * sizeof(void *);\n-\tsz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);\n-\treturn sz;\n-}\n-\n-int\n-rte_ring_init(struct rte_ring *r, const char *name, 
unsigned count,\n-\tunsigned flags)\n-{\n-\t/* compilation-time checks */\n-\tRTE_BUILD_BUG_ON((sizeof(struct rte_ring) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#ifdef RTE_RING_SPLIT_PROD_CONS\n-\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, cons) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#endif\n-\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, prod) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-\tRTE_BUILD_BUG_ON((sizeof(struct rte_ring_debug_stats) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-\tRTE_BUILD_BUG_ON((offsetof(struct rte_ring, stats) &\n-\t\t\t  RTE_CACHE_LINE_MASK) != 0);\n-#endif\n-\n-\t/* init the ring structure */\n-\tmemset(r, 0, sizeof(*r));\n-\tsnprintf(r->name, sizeof(r->name), \"%s\", name);\n-\tr->flags = flags;\n-\tr->prod.watermark = count;\n-\tr->prod.sp_enqueue = !!(flags & RING_F_SP_ENQ);\n-\tr->cons.sc_dequeue = !!(flags & RING_F_SC_DEQ);\n-\tr->prod.size = r->cons.size = count;\n-\tr->prod.mask = r->cons.mask = count-1;\n-\tr->prod.head = r->cons.head = 0;\n-\tr->prod.tail = r->cons.tail = 0;\n-\n-\treturn 0;\n-}\n-\n-/* create the ring */\n-struct rte_ring *\n-rte_ring_create(const char *name, unsigned count, int socket_id,\n-\t\tunsigned flags)\n-{\n-\tchar mz_name[RTE_MEMZONE_NAMESIZE];\n-\tstruct rte_ring *r;\n-\tstruct rte_tailq_entry *te;\n-\tconst struct rte_memzone *mz;\n-\tssize_t ring_size;\n-\tint mz_flags = 0;\n-\tstruct rte_ring_list* ring_list = NULL;\n-\n-\t/* check that we have an initialised tail queue */\n-\tif ((ring_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn NULL;\n-\t}\n-\n-\tring_size = rte_ring_get_memsize(count);\n-\tif (ring_size < 0) {\n-\t\trte_errno = ring_size;\n-\t\treturn NULL;\n-\t}\n-\n-\tte = rte_zmalloc(\"RING_TAILQ_ENTRY\", sizeof(*te), 0);\n-\tif (te == NULL) {\n-\t\tRTE_LOG(ERR, RING, \"Cannot reserve memory for tailq\\n\");\n-\t\trte_errno = ENOMEM;\n-\t\treturn NULL;\n-\t}\n-\n-\tsnprintf(mz_name, sizeof(mz_name), \"%s%s\", RTE_RING_MZ_PREFIX, name);\n-\n-\trte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);\n-\n-\t/* reserve a memory zone for this ring. If we can't get rte_config or\n-\t * we are secondary process, the memzone_reserve function will set\n-\t * rte_errno for us appropriately - hence no check in this this function */\n-\tmz = rte_memzone_reserve(mz_name, ring_size, socket_id, mz_flags);\n-\tif (mz != NULL) {\n-\t\tr = mz->addr;\n-\t\t/* no need to check return value here, we already checked the\n-\t\t * arguments above */\n-\t\trte_ring_init(r, name, count, flags);\n-\n-\t\tte->data = (void *) r;\n-\n-\t\tTAILQ_INSERT_TAIL(ring_list, te, next);\n-\t} else {\n-\t\tr = NULL;\n-\t\tRTE_LOG(ERR, RING, \"Cannot reserve memory\\n\");\n-\t\trte_free(te);\n-\t}\n-\trte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);\n-\n-\treturn r;\n-}\n-\n-/*\n- * change the high water mark. 
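A sketch of driving the creation path shown above (name and size are illustrative; EAL is assumed to be initialized, and the single-producer/single-consumer flags are optional):

#include <stdio.h>
#include <rte_ring.h>
#include <rte_errno.h>

static struct rte_ring *
make_ring(void)
{
	/* 1024 slots must be a power of 2; usable capacity is 1023 */
	struct rte_ring *r = rte_ring_create("app_ring", 1024, SOCKET_ID_ANY,
					     RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL)
		fprintf(stderr, "rte_ring_create: rte_errno=%d\n", rte_errno);
	return r;
}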
If *count* is 0, water marking is\n- * disabled\n- */\n-int\n-rte_ring_set_water_mark(struct rte_ring *r, unsigned count)\n-{\n-\tif (count >= r->prod.size)\n-\t\treturn -EINVAL;\n-\n-\t/* if count is 0, disable the watermarking */\n-\tif (count == 0)\n-\t\tcount = r->prod.size;\n-\n-\tr->prod.watermark = count;\n-\treturn 0;\n-}\n-\n-/* dump the status of the ring on the console */\n-void\n-rte_ring_dump(FILE *f, const struct rte_ring *r)\n-{\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-\tstruct rte_ring_debug_stats sum;\n-\tunsigned lcore_id;\n-#endif\n-\n-\tfprintf(f, \"ring <%s>@%p\\n\", r->name, r);\n-\tfprintf(f, \"  flags=%x\\n\", r->flags);\n-\tfprintf(f, \"  size=%\"PRIu32\"\\n\", r->prod.size);\n-\tfprintf(f, \"  ct=%\"PRIu32\"\\n\", r->cons.tail);\n-\tfprintf(f, \"  ch=%\"PRIu32\"\\n\", r->cons.head);\n-\tfprintf(f, \"  pt=%\"PRIu32\"\\n\", r->prod.tail);\n-\tfprintf(f, \"  ph=%\"PRIu32\"\\n\", r->prod.head);\n-\tfprintf(f, \"  used=%u\\n\", rte_ring_count(r));\n-\tfprintf(f, \"  avail=%u\\n\", rte_ring_free_count(r));\n-\tif (r->prod.watermark == r->prod.size)\n-\t\tfprintf(f, \"  watermark=0\\n\");\n-\telse\n-\t\tfprintf(f, \"  watermark=%\"PRIu32\"\\n\", r->prod.watermark);\n-\n-\t/* sum and dump statistics */\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-\tmemset(&sum, 0, sizeof(sum));\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tsum.enq_success_bulk += r->stats[lcore_id].enq_success_bulk;\n-\t\tsum.enq_success_objs += r->stats[lcore_id].enq_success_objs;\n-\t\tsum.enq_quota_bulk += r->stats[lcore_id].enq_quota_bulk;\n-\t\tsum.enq_quota_objs += r->stats[lcore_id].enq_quota_objs;\n-\t\tsum.enq_fail_bulk += r->stats[lcore_id].enq_fail_bulk;\n-\t\tsum.enq_fail_objs += r->stats[lcore_id].enq_fail_objs;\n-\t\tsum.deq_success_bulk += r->stats[lcore_id].deq_success_bulk;\n-\t\tsum.deq_success_objs += r->stats[lcore_id].deq_success_objs;\n-\t\tsum.deq_fail_bulk += r->stats[lcore_id].deq_fail_bulk;\n-\t\tsum.deq_fail_objs += r->stats[lcore_id].deq_fail_objs;\n-\t}\n-\tfprintf(f, \"  size=%\"PRIu32\"\\n\", r->prod.size);\n-\tfprintf(f, \"  enq_success_bulk=%\"PRIu64\"\\n\", sum.enq_success_bulk);\n-\tfprintf(f, \"  enq_success_objs=%\"PRIu64\"\\n\", sum.enq_success_objs);\n-\tfprintf(f, \"  enq_quota_bulk=%\"PRIu64\"\\n\", sum.enq_quota_bulk);\n-\tfprintf(f, \"  enq_quota_objs=%\"PRIu64\"\\n\", sum.enq_quota_objs);\n-\tfprintf(f, \"  enq_fail_bulk=%\"PRIu64\"\\n\", sum.enq_fail_bulk);\n-\tfprintf(f, \"  enq_fail_objs=%\"PRIu64\"\\n\", sum.enq_fail_objs);\n-\tfprintf(f, \"  deq_success_bulk=%\"PRIu64\"\\n\", sum.deq_success_bulk);\n-\tfprintf(f, \"  deq_success_objs=%\"PRIu64\"\\n\", sum.deq_success_objs);\n-\tfprintf(f, \"  deq_fail_bulk=%\"PRIu64\"\\n\", sum.deq_fail_bulk);\n-\tfprintf(f, \"  deq_fail_objs=%\"PRIu64\"\\n\", sum.deq_fail_objs);\n-#else\n-\tfprintf(f, \"  no statistics available\\n\");\n-#endif\n-}\n-\n-/* dump the status of all rings on the console */\n-void\n-rte_ring_list_dump(FILE *f)\n-{\n-\tconst struct rte_tailq_entry *te;\n-\tstruct rte_ring_list *ring_list;\n-\n-\t/* check that we have an initialised tail queue */\n-\tif ((ring_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn;\n-\t}\n-\n-\trte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);\n-\n-\tTAILQ_FOREACH(te, ring_list, next) {\n-\t\trte_ring_dump(f, (struct rte_ring *) te->data);\n-\t}\n-\n-\trte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);\n-}\n-\n-/* search a ring from its name */\n-struct rte_ring *\n-rte_ring_lookup(const char 
*name)\n-{\n-\tstruct rte_tailq_entry *te;\n-\tstruct rte_ring *r = NULL;\n-\tstruct rte_ring_list *ring_list;\n-\n-\t/* check that we have an initialized tail queue */\n-\tif ((ring_list =\n-\t     RTE_TAILQ_LOOKUP_BY_IDX(RTE_TAILQ_RING, rte_ring_list)) == NULL) {\n-\t\trte_errno = E_RTE_NO_TAILQ;\n-\t\treturn NULL;\n-\t}\n-\n-\trte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);\n-\n-\tTAILQ_FOREACH(te, ring_list, next) {\n-\t\tr = (struct rte_ring *) te->data;\n-\t\tif (strncmp(name, r->name, RTE_RING_NAMESIZE) == 0)\n-\t\t\tbreak;\n-\t}\n-\n-\trte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);\n-\n-\tif (te == NULL) {\n-\t\trte_errno = ENOENT;\n-\t\treturn NULL;\n-\t}\n-\n-\treturn r;\n-}\ndiff --git a/lib/librte_ring/rte_ring.h b/lib/librte_ring/rte_ring.h\ndeleted file mode 100644\nindex 7cd5f2d..0000000\n--- a/lib/librte_ring/rte_ring.h\n+++ /dev/null\n@@ -1,1214 +0,0 @@\n-/*-\n- *   BSD LICENSE\n- *\n- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n- *   All rights reserved.\n- *\n- *   Redistribution and use in source and binary forms, with or without\n- *   modification, are permitted provided that the following conditions\n- *   are met:\n- *\n- *     * Redistributions of source code must retain the above copyright\n- *       notice, this list of conditions and the following disclaimer.\n- *     * Redistributions in binary form must reproduce the above copyright\n- *       notice, this list of conditions and the following disclaimer in\n- *       the documentation and/or other materials provided with the\n- *       distribution.\n- *     * Neither the name of Intel Corporation nor the names of its\n- *       contributors may be used to endorse or promote products derived\n- *       from this software without specific prior written permission.\n- *\n- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n- *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n- */\n-\n-/*\n- * Derived from FreeBSD's bufring.h\n- *\n- **************************************************************************\n- *\n- * Copyright (c) 2007-2009 Kip Macy kmacy@freebsd.org\n- * All rights reserved.\n- *\n- * Redistribution and use in source and binary forms, with or without\n- * modification, are permitted provided that the following conditions are met:\n- *\n- * 1. Redistributions of source code must retain the above copyright notice,\n- *    this list of conditions and the following disclaimer.\n- *\n- * 2. 
The name of Kip Macy nor the names of other\n- *    contributors may be used to endorse or promote products derived from\n- *    this software without specific prior written permission.\n- *\n- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n- * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE\n- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n- * POSSIBILITY OF SUCH DAMAGE.\n- *\n- ***************************************************************************/\n-\n-#ifndef _RTE_RING_H_\n-#define _RTE_RING_H_\n-\n-/**\n- * @file\n- * RTE Ring\n- *\n- * The Ring Manager is a fixed-size queue, implemented as a table of\n- * pointers. Head and tail pointers are modified atomically, allowing\n- * concurrent access to it. It has the following features:\n- *\n- * - FIFO (First In First Out)\n- * - Maximum size is fixed; the pointers are stored in a table.\n- * - Lockless implementation.\n- * - Multi- or single-consumer dequeue.\n- * - Multi- or single-producer enqueue.\n- * - Bulk dequeue.\n- * - Bulk enqueue.\n- *\n- * Note: the ring implementation is not preemptable. A lcore must not\n- * be interrupted by another task that uses the same ring.\n- *\n- */\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <stdio.h>\n-#include <stdint.h>\n-#include <sys/queue.h>\n-#include <errno.h>\n-#include <rte_common.h>\n-#include <rte_memory.h>\n-#include <rte_lcore.h>\n-#include <rte_atomic.h>\n-#include <rte_branch_prediction.h>\n-\n-enum rte_ring_queue_behavior {\n-\tRTE_RING_QUEUE_FIXED = 0, /* Enq/Deq a fixed number of items from a ring */\n-\tRTE_RING_QUEUE_VARIABLE   /* Enq/Deq as many items a possible from ring */\n-};\n-\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-/**\n- * A structure that stores the ring statistics (per-lcore).\n- */\n-struct rte_ring_debug_stats {\n-\tuint64_t enq_success_bulk; /**< Successful enqueues number. */\n-\tuint64_t enq_success_objs; /**< Objects successfully enqueued. */\n-\tuint64_t enq_quota_bulk;   /**< Successful enqueues above watermark. */\n-\tuint64_t enq_quota_objs;   /**< Objects enqueued above watermark. */\n-\tuint64_t enq_fail_bulk;    /**< Failed enqueues number. */\n-\tuint64_t enq_fail_objs;    /**< Objects that failed to be enqueued. */\n-\tuint64_t deq_success_bulk; /**< Successful dequeues number. */\n-\tuint64_t deq_success_objs; /**< Objects successfully dequeued. */\n-\tuint64_t deq_fail_bulk;    /**< Failed dequeues number. */\n-\tuint64_t deq_fail_objs;    /**< Objects that failed to be dequeued. */\n-} __rte_cache_aligned;\n-#endif\n-\n-#define RTE_RING_NAMESIZE 32 /**< The maximum length of a ring name. */\n-#define RTE_RING_MZ_PREFIX \"RG_\"\n-\n-/**\n- * An RTE ring structure.\n- *\n- * The producer and the consumer have a head and a tail index. The particularity\n- * of these index is that they are not between 0 and size(ring). 
These indexes\n- * are between 0 and 2^32, and we mask their value when we access the ring[]\n- * field. Thanks to this assumption, we can do subtractions between 2 index\n- * values in a modulo-32bit base: that's why the overflow of the indexes is not\n- * a problem.\n- */\n-struct rte_ring {\n-\tchar name[RTE_RING_NAMESIZE];    /**< Name of the ring. */\n-\tint flags;                       /**< Flags supplied at creation. */\n-\n-\t/** Ring producer status. */\n-\tstruct prod {\n-\t\tuint32_t watermark;      /**< Maximum items before EDQUOT. */\n-\t\tuint32_t sp_enqueue;     /**< True, if single producer. */\n-\t\tuint32_t size;           /**< Size of ring. */\n-\t\tuint32_t mask;           /**< Mask (size-1) of ring. */\n-\t\tvolatile uint32_t head;  /**< Producer head. */\n-\t\tvolatile uint32_t tail;  /**< Producer tail. */\n-\t} prod __rte_cache_aligned;\n-\n-\t/** Ring consumer status. */\n-\tstruct cons {\n-\t\tuint32_t sc_dequeue;     /**< True, if single consumer. */\n-\t\tuint32_t size;           /**< Size of the ring. */\n-\t\tuint32_t mask;           /**< Mask (size-1) of ring. */\n-\t\tvolatile uint32_t head;  /**< Consumer head. */\n-\t\tvolatile uint32_t tail;  /**< Consumer tail. */\n-#ifdef RTE_RING_SPLIT_PROD_CONS\n-\t} cons __rte_cache_aligned;\n-#else\n-\t} cons;\n-#endif\n-\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-\tstruct rte_ring_debug_stats stats[RTE_MAX_LCORE];\n-#endif\n-\n-\tvoid * ring[0] __rte_cache_aligned; /**< Memory space of ring starts here.\n-\t                                     * not volatile so need to be careful\n-\t                                     * about compiler re-ordering */\n-};\n-\n-#define RING_F_SP_ENQ 0x0001 /**< The default enqueue is \"single-producer\". */\n-#define RING_F_SC_DEQ 0x0002 /**< The default dequeue is \"single-consumer\". */\n-#define RTE_RING_QUOT_EXCEED (1 << 31)  /**< Quota exceed for burst ops */\n-#define RTE_RING_SZ_MASK  (unsigned)(0x0fffffff) /**< Ring size mask */\n-\n-/**\n- * @internal When debug is enabled, store ring statistics.\n- * @param r\n- *   A pointer to the ring.\n- * @param name\n- *   The name of the statistics field to increment in the ring.\n- * @param n\n- *   The number to add to the object-oriented statistics.\n- */\n-#ifdef RTE_LIBRTE_RING_DEBUG\n-#define __RING_STAT_ADD(r, name, n) do {\t\t\\\n-\t\tunsigned __lcore_id = rte_lcore_id();\t\\\n-\t\tr->stats[__lcore_id].name##_objs += n;\t\\\n-\t\tr->stats[__lcore_id].name##_bulk += 1;\t\\\n-\t} while(0)\n-#else\n-#define __RING_STAT_ADD(r, name, n) do {} while(0)\n-#endif\n-\n-/**\n- * Calculate the memory size needed for a ring\n- *\n- * This function returns the number of bytes needed for a ring, given\n- * the number of elements in it. This value is the sum of the size of\n- * the structure rte_ring and the size of the memory needed by the\n- * objects pointers. The value is aligned to a cache line size.\n- *\n- * @param count\n- *   The number of elements in the ring (must be a power of 2).\n- * @return\n- *   - The memory size needed for the ring on success.\n- *   - -EINVAL if count is not a power of 2.\n- */\n-ssize_t rte_ring_get_memsize(unsigned count);\n-\n-/**\n- * Initialize a ring structure.\n- *\n- * Initialize a ring structure in memory pointed by \"r\". The size of the\n- * memory area must be large enough to store the ring structure and the\n- * object table. It is advised to use rte_ring_get_memsize() to get the\n- * appropriate size.\n- *\n- * The ring size is set to *count*, which must be a power of two. 
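The unmasked-index scheme described above can be checked concretely: because the head/tail indexes are plain uint32_t values that are only masked on array access, their differences stay correct across the 2^32 wrap. A small worked example for a ring of size 8:

#include <stdint.h>
#include <assert.h>

static void
index_wrap_example(void)
{
	uint32_t mask = 7;		   /* ring size 8, usable 7 */
	uint32_t cons_tail = 0xfffffffeu;  /* 2^32 - 2 */
	uint32_t prod_tail = 3;		   /* producer wrapped: really 2^32 + 3 */

	/* used = (2^32 + 3) - (2^32 - 2) = 5, unaffected by the wrap */
	assert(((prod_tail - cons_tail) & mask) == 5);
	/* free = usable - used = 7 - 5 = 2 */
	assert(((cons_tail - prod_tail - 1) & mask) == 2);
}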
Water\n- * marking is disabled by default. The real usable ring size is\n- * *count-1* instead of *count* to differentiate a free ring from an\n- * empty ring.\n- *\n- * The ring is not added in RTE_TAILQ_RING global list. Indeed, the\n- * memory given by the caller may not be shareable among dpdk\n- * processes.\n- *\n- * @param r\n- *   The pointer to the ring structure followed by the objects table.\n- * @param name\n- *   The name of the ring.\n- * @param count\n- *   The number of elements in the ring (must be a power of 2).\n- * @param flags\n- *   An OR of the following:\n- *    - RING_F_SP_ENQ: If this flag is set, the default behavior when\n- *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``\n- *      is \"single-producer\". Otherwise, it is \"multi-producers\".\n- *    - RING_F_SC_DEQ: If this flag is set, the default behavior when\n- *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``\n- *      is \"single-consumer\". Otherwise, it is \"multi-consumers\".\n- * @return\n- *   0 on success, or a negative value on error.\n- */\n-int rte_ring_init(struct rte_ring *r, const char *name, unsigned count,\n-\tunsigned flags);\n-\n-/**\n- * Create a new ring named *name* in memory.\n- *\n- * This function uses ``memzone_reserve()`` to allocate memory. Then it\n- * calls rte_ring_init() to initialize an empty ring.\n- *\n- * The new ring size is set to *count*, which must be a power of\n- * two. Water marking is disabled by default. The real usable ring size\n- * is *count-1* instead of *count* to differentiate a free ring from an\n- * empty ring.\n- *\n- * The ring is added in RTE_TAILQ_RING list.\n- *\n- * @param name\n- *   The name of the ring.\n- * @param count\n- *   The size of the ring (must be a power of 2).\n- * @param socket_id\n- *   The *socket_id* argument is the socket identifier in case of\n- *   NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA\n- *   constraint for the reserved zone.\n- * @param flags\n- *   An OR of the following:\n- *    - RING_F_SP_ENQ: If this flag is set, the default behavior when\n- *      using ``rte_ring_enqueue()`` or ``rte_ring_enqueue_bulk()``\n- *      is \"single-producer\". Otherwise, it is \"multi-producers\".\n- *    - RING_F_SC_DEQ: If this flag is set, the default behavior when\n- *      using ``rte_ring_dequeue()`` or ``rte_ring_dequeue_bulk()``\n- *      is \"single-consumer\". Otherwise, it is \"multi-consumers\".\n- * @return\n- *   On success, the pointer to the new allocated ring. NULL on error with\n- *    rte_errno set appropriately. Possible errno values include:\n- *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure\n- *    - E_RTE_SECONDARY - function was called from a secondary process instance\n- *    - E_RTE_NO_TAILQ - no tailq list could be got for the ring list\n- *    - EINVAL - count provided is not a power of 2\n- *    - ENOSPC - the maximum number of memzones has already been allocated\n- *    - EEXIST - a memzone with the same name already exists\n- *    - ENOMEM - no appropriate memory area found in which to create memzone\n- */\n-struct rte_ring *rte_ring_create(const char *name, unsigned count,\n-\t\t\t\t int socket_id, unsigned flags);\n-\n-/**\n- * Change the high water mark.\n- *\n- * If *count* is 0, water marking is disabled. Otherwise, it is set to the\n- * *count* value. 
The *count* value must be greater than 0 and less\n- * than the ring size.\n- *\n- * This function can be called at any time (not necessarily at\n- * initialization).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param count\n- *   The new water mark value.\n- * @return\n- *   - 0: Success; water mark changed.\n- *   - -EINVAL: Invalid water mark value.\n- */\n-int rte_ring_set_water_mark(struct rte_ring *r, unsigned count);\n-\n-/**\n- * Dump the status of the ring to the console.\n- *\n- * @param f\n- *   A pointer to a file for output\n- * @param r\n- *   A pointer to the ring structure.\n- */\n-void rte_ring_dump(FILE *f, const struct rte_ring *r);\n-\n-/* the actual enqueue of pointers on the ring.\n- * Placed here since identical code needed in both\n- * single and multi producer enqueue functions */\n-#define ENQUEUE_PTRS() do { \\\n-\tconst uint32_t size = r->prod.size; \\\n-\tuint32_t idx = prod_head & mask; \\\n-\tif (likely(idx + n < size)) { \\\n-\t\tfor (i = 0; i < (n & ((~(unsigned)0x3))); i+=4, idx+=4) { \\\n-\t\t\tr->ring[idx] = obj_table[i]; \\\n-\t\t\tr->ring[idx+1] = obj_table[i+1]; \\\n-\t\t\tr->ring[idx+2] = obj_table[i+2]; \\\n-\t\t\tr->ring[idx+3] = obj_table[i+3]; \\\n-\t\t} \\\n-\t\tswitch (n & 0x3) { \\\n-\t\t\tcase 3: r->ring[idx++] = obj_table[i++]; \\\n-\t\t\tcase 2: r->ring[idx++] = obj_table[i++]; \\\n-\t\t\tcase 1: r->ring[idx++] = obj_table[i++]; \\\n-\t\t} \\\n-\t} else { \\\n-\t\tfor (i = 0; idx < size; i++, idx++)\\\n-\t\t\tr->ring[idx] = obj_table[i]; \\\n-\t\tfor (idx = 0; i < n; i++, idx++) \\\n-\t\t\tr->ring[idx] = obj_table[i]; \\\n-\t} \\\n-} while(0)\n-\n-/* the actual copy of pointers on the ring to obj_table.\n- * Placed here since identical code needed in both\n- * single and multi consumer dequeue functions */\n-#define DEQUEUE_PTRS() do { \\\n-\tuint32_t idx = cons_head & mask; \\\n-\tconst uint32_t size = r->cons.size; \\\n-\tif (likely(idx + n < size)) { \\\n-\t\tfor (i = 0; i < (n & (~(unsigned)0x3)); i+=4, idx+=4) {\\\n-\t\t\tobj_table[i] = r->ring[idx]; \\\n-\t\t\tobj_table[i+1] = r->ring[idx+1]; \\\n-\t\t\tobj_table[i+2] = r->ring[idx+2]; \\\n-\t\t\tobj_table[i+3] = r->ring[idx+3]; \\\n-\t\t} \\\n-\t\tswitch (n & 0x3) { \\\n-\t\t\tcase 3: obj_table[i++] = r->ring[idx++]; \\\n-\t\t\tcase 2: obj_table[i++] = r->ring[idx++]; \\\n-\t\t\tcase 1: obj_table[i++] = r->ring[idx++]; \\\n-\t\t} \\\n-\t} else { \\\n-\t\tfor (i = 0; idx < size; i++, idx++) \\\n-\t\t\tobj_table[i] = r->ring[idx]; \\\n-\t\tfor (idx = 0; i < n; i++, idx++) \\\n-\t\t\tobj_table[i] = r->ring[idx]; \\\n-\t} \\\n-} while (0)\n-\n-/**\n- * @internal Enqueue several objects on the ring (multi-producers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * producer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @param behavior\n- *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring\n- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring\n- * @return\n- *   Depend on the behavior value\n- *   if behavior = RTE_RING_QUEUE_FIXED\n- *   - 0: Success; objects enqueue.\n- *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n- *   if behavior = RTE_RING_QUEUE_VARIABLE\n- *   - n: Actual number of objects enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-__rte_ring_mp_do_enqueue(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n-{\n-\tuint32_t prod_head, prod_next;\n-\tuint32_t cons_tail, free_entries;\n-\tconst unsigned max = n;\n-\tint success;\n-\tunsigned i;\n-\tuint32_t mask = r->prod.mask;\n-\tint ret;\n-\n-\t/* move prod.head atomically */\n-\tdo {\n-\t\t/* Reset n to the initial burst count */\n-\t\tn = max;\n-\n-\t\tprod_head = r->prod.head;\n-\t\tcons_tail = r->cons.tail;\n-\t\t/* The subtraction is done between two unsigned 32bits value\n-\t\t * (the result is always modulo 32 bits even if we have\n-\t\t * prod_head > cons_tail). So 'free_entries' is always between 0\n-\t\t * and size(ring)-1. */\n-\t\tfree_entries = (mask + cons_tail - prod_head);\n-\n-\t\t/* check that we have enough room in ring */\n-\t\tif (unlikely(n > free_entries)) {\n-\t\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n-\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n-\t\t\t\treturn -ENOBUFS;\n-\t\t\t}\n-\t\t\telse {\n-\t\t\t\t/* No free entry available */\n-\t\t\t\tif (unlikely(free_entries == 0)) {\n-\t\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n-\t\t\t\t\treturn 0;\n-\t\t\t\t}\n-\n-\t\t\t\tn = free_entries;\n-\t\t\t}\n-\t\t}\n-\n-\t\tprod_next = prod_head + n;\n-\t\tsuccess = rte_atomic32_cmpset(&r->prod.head, prod_head,\n-\t\t\t\t\t      prod_next);\n-\t} while (unlikely(success == 0));\n-\n-\t/* write entries in ring */\n-\tENQUEUE_PTRS();\n-\trte_compiler_barrier();\n-\n-\t/* if we exceed the watermark */\n-\tif (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {\n-\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :\n-\t\t\t\t(int)(n | RTE_RING_QUOT_EXCEED);\n-\t\t__RING_STAT_ADD(r, enq_quota, n);\n-\t}\n-\telse {\n-\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;\n-\t\t__RING_STAT_ADD(r, enq_success, n);\n-\t}\n-\n-\t/*\n-\t * If there are other enqueues in progress that preceded us,\n-\t * we need to wait for them to complete\n-\t */\n-\twhile (unlikely(r->prod.tail != prod_head))\n-\t\trte_pause();\n-\n-\tr->prod.tail = prod_next;\n-\treturn ret;\n-}\n-\n-/**\n- * @internal Enqueue several objects on a ring (NOT multi-producers safe).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @param behavior\n- *   RTE_RING_QUEUE_FIXED:    Enqueue a fixed number of items from a ring\n- *   RTE_RING_QUEUE_VARIABLE: Enqueue as many items a possible from ring\n- * @return\n- *   Depend on the behavior value\n- *   if behavior = RTE_RING_QUEUE_FIXED\n- *   - 0: Success; objects enqueue.\n- *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n- *   if behavior = RTE_RING_QUEUE_VARIABLE\n- *   - n: Actual number of objects enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-__rte_ring_sp_do_enqueue(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n-{\n-\tuint32_t prod_head, cons_tail;\n-\tuint32_t prod_next, free_entries;\n-\tunsigned i;\n-\tuint32_t mask = r->prod.mask;\n-\tint ret;\n-\n-\tprod_head = r->prod.head;\n-\tcons_tail = r->cons.tail;\n-\t/* The subtraction is done between two unsigned 32bits value\n-\t * (the result is always modulo 32 bits even if we have\n-\t * prod_head > cons_tail). So 'free_entries' is always between 0\n-\t * and size(ring)-1. */\n-\tfree_entries = mask + cons_tail - prod_head;\n-\n-\t/* check that we have enough room in ring */\n-\tif (unlikely(n > free_entries)) {\n-\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n-\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n-\t\t\treturn -ENOBUFS;\n-\t\t}\n-\t\telse {\n-\t\t\t/* No free entry available */\n-\t\t\tif (unlikely(free_entries == 0)) {\n-\t\t\t\t__RING_STAT_ADD(r, enq_fail, n);\n-\t\t\t\treturn 0;\n-\t\t\t}\n-\n-\t\t\tn = free_entries;\n-\t\t}\n-\t}\n-\n-\tprod_next = prod_head + n;\n-\tr->prod.head = prod_next;\n-\n-\t/* write entries in ring */\n-\tENQUEUE_PTRS();\n-\trte_compiler_barrier();\n-\n-\t/* if we exceed the watermark */\n-\tif (unlikely(((mask + 1) - free_entries + n) > r->prod.watermark)) {\n-\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? -EDQUOT :\n-\t\t\t(int)(n | RTE_RING_QUOT_EXCEED);\n-\t\t__RING_STAT_ADD(r, enq_quota, n);\n-\t}\n-\telse {\n-\t\tret = (behavior == RTE_RING_QUEUE_FIXED) ? 0 : n;\n-\t\t__RING_STAT_ADD(r, enq_success, n);\n-\t}\n-\n-\tr->prod.tail = prod_next;\n-\treturn ret;\n-}\n-\n-/**\n- * @internal Dequeue several objects from a ring (multi-consumers safe). 
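The watermark branch visible in both enqueue paths above surfaces to callers of the fixed-size API as -EDQUOT, which is a warning rather than a failure; a hedged sketch of treating it as backpressure:

#include <errno.h>
#include <rte_ring.h>

static int
send_with_backpressure(struct rte_ring *r, void *obj)
{
	int ret = rte_ring_enqueue(r, obj);

	if (ret == -EDQUOT) {
		/* obj WAS enqueued; occupancy merely crossed the level set
		 * with rte_ring_set_water_mark(), so slow this producer */
		return 0;
	}
	return ret;	/* 0 on success, -ENOBUFS if the ring was full */
}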
When\n- * the request objects are more than the available objects, only dequeue the\n- * actual number of objects\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * consumer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @param behavior\n- *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring\n- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring\n- * @return\n- *   Depend on the behavior value\n- *   if behavior = RTE_RING_QUEUE_FIXED\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n- *     dequeued.\n- *   if behavior = RTE_RING_QUEUE_VARIABLE\n- *   - n: Actual number of objects dequeued.\n- */\n-\n-static inline int __attribute__((always_inline))\n-__rte_ring_mc_do_dequeue(struct rte_ring *r, void **obj_table,\n-\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n-{\n-\tuint32_t cons_head, prod_tail;\n-\tuint32_t cons_next, entries;\n-\tconst unsigned max = n;\n-\tint success;\n-\tunsigned i;\n-\tuint32_t mask = r->prod.mask;\n-\n-\t/* move cons.head atomically */\n-\tdo {\n-\t\t/* Restore n as it may change every loop */\n-\t\tn = max;\n-\n-\t\tcons_head = r->cons.head;\n-\t\tprod_tail = r->prod.tail;\n-\t\t/* The subtraction is done between two unsigned 32bits value\n-\t\t * (the result is always modulo 32 bits even if we have\n-\t\t * cons_head > prod_tail). So 'entries' is always between 0\n-\t\t * and size(ring)-1. */\n-\t\tentries = (prod_tail - cons_head);\n-\n-\t\t/* Set the actual entries for dequeue */\n-\t\tif (n > entries) {\n-\t\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n-\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n-\t\t\t\treturn -ENOENT;\n-\t\t\t}\n-\t\t\telse {\n-\t\t\t\tif (unlikely(entries == 0)){\n-\t\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n-\t\t\t\t\treturn 0;\n-\t\t\t\t}\n-\n-\t\t\t\tn = entries;\n-\t\t\t}\n-\t\t}\n-\n-\t\tcons_next = cons_head + n;\n-\t\tsuccess = rte_atomic32_cmpset(&r->cons.head, cons_head,\n-\t\t\t\t\t      cons_next);\n-\t} while (unlikely(success == 0));\n-\n-\t/* copy in table */\n-\tDEQUEUE_PTRS();\n-\trte_compiler_barrier();\n-\n-\t/*\n-\t * If there are other dequeues in progress that preceded us,\n-\t * we need to wait for them to complete\n-\t */\n-\twhile (unlikely(r->cons.tail != cons_head))\n-\t\trte_pause();\n-\n-\t__RING_STAT_ADD(r, deq_success, n);\n-\tr->cons.tail = cons_next;\n-\n-\treturn behavior == RTE_RING_QUEUE_FIXED ? 
0 : n;\n-}\n-\n-/**\n- * @internal Dequeue several objects from a ring (NOT multi-consumers safe).\n- * When the request objects are more than the available objects, only dequeue\n- * the actual number of objects\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @param behavior\n- *   RTE_RING_QUEUE_FIXED:    Dequeue a fixed number of items from a ring\n- *   RTE_RING_QUEUE_VARIABLE: Dequeue as many items a possible from ring\n- * @return\n- *   Depend on the behavior value\n- *   if behavior = RTE_RING_QUEUE_FIXED\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n- *     dequeued.\n- *   if behavior = RTE_RING_QUEUE_VARIABLE\n- *   - n: Actual number of objects dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-__rte_ring_sc_do_dequeue(struct rte_ring *r, void **obj_table,\n-\t\t unsigned n, enum rte_ring_queue_behavior behavior)\n-{\n-\tuint32_t cons_head, prod_tail;\n-\tuint32_t cons_next, entries;\n-\tunsigned i;\n-\tuint32_t mask = r->prod.mask;\n-\n-\tcons_head = r->cons.head;\n-\tprod_tail = r->prod.tail;\n-\t/* The subtraction is done between two unsigned 32bits value\n-\t * (the result is always modulo 32 bits even if we have\n-\t * cons_head > prod_tail). So 'entries' is always between 0\n-\t * and size(ring)-1. */\n-\tentries = prod_tail - cons_head;\n-\n-\tif (n > entries) {\n-\t\tif (behavior == RTE_RING_QUEUE_FIXED) {\n-\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n-\t\t\treturn -ENOENT;\n-\t\t}\n-\t\telse {\n-\t\t\tif (unlikely(entries == 0)){\n-\t\t\t\t__RING_STAT_ADD(r, deq_fail, n);\n-\t\t\t\treturn 0;\n-\t\t\t}\n-\n-\t\t\tn = entries;\n-\t\t}\n-\t}\n-\n-\tcons_next = cons_head + n;\n-\tr->cons.head = cons_next;\n-\n-\t/* copy in table */\n-\tDEQUEUE_PTRS();\n-\trte_compiler_barrier();\n-\n-\t__RING_STAT_ADD(r, deq_success, n);\n-\tr->cons.tail = cons_next;\n-\treturn behavior == RTE_RING_QUEUE_FIXED ? 0 : n;\n-}\n-\n-/**\n- * Enqueue several objects on the ring (multi-producers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * producer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - 0: Success; objects enqueue.\n- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue, no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_mp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n)\n-{\n-\treturn __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n-}\n-\n-/**\n- * Enqueue several objects on a ring (NOT multi-producers safe).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - 0: Success; objects enqueued.\n- *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_sp_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n)\n-{\n-\treturn __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n-}\n-\n-/**\n- * Enqueue several objects on a ring.\n- *\n- * This function calls the multi-producer or the single-producer\n- * version depending on the default behavior that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - 0: Success; objects enqueued.\n- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,\n-\t\t      unsigned n)\n-{\n-\tif (r->prod.sp_enqueue)\n-\t\treturn rte_ring_sp_enqueue_bulk(r, obj_table, n);\n-\telse\n-\t\treturn rte_ring_mp_enqueue_bulk(r, obj_table, n);\n-}\n-\n-/**\n- * Enqueue one object on a ring (multi-producers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * producer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- * @return\n- *   - 0: Success; objects enqueued.\n- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_mp_enqueue(struct rte_ring *r, void *obj)\n-{\n-\treturn rte_ring_mp_enqueue_bulk(r, &obj, 1);\n-}\n-\n-/**\n- * Enqueue one object on a ring (NOT multi-producers safe).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- * @return\n- *   - 0: Success; objects enqueued.\n- *   - -EDQUOT: Quota exceeded. The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_sp_enqueue(struct rte_ring *r, void *obj)\n-{\n-\treturn rte_ring_sp_enqueue_bulk(r, &obj, 1);\n-}\n-\n-/**\n- * Enqueue one object on a ring.\n- *\n- * This function calls the multi-producer or the single-producer\n- * version, depending on the default behaviour that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj\n- *   A pointer to the object to be added.\n- * @return\n- *   - 0: Success; objects enqueued.\n- *   - -EDQUOT: Quota exceeded. 
The objects have been enqueued, but the\n- *     high water mark is exceeded.\n- *   - -ENOBUFS: Not enough room in the ring to enqueue; no object is enqueued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_enqueue(struct rte_ring *r, void *obj)\n-{\n-\tif (r->prod.sp_enqueue)\n-\t\treturn rte_ring_sp_enqueue(r, obj);\n-\telse\n-\t\treturn rte_ring_mp_enqueue(r, obj);\n-}\n-\n-/**\n- * Dequeue several objects from a ring (multi-consumers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * consumer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @return\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_mc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\treturn __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n-}\n-\n-/**\n- * Dequeue several objects from a ring (NOT multi-consumers safe).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table,\n- *   must be strictly positive.\n- * @return\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_sc_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\treturn __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_FIXED);\n-}\n-\n-/**\n- * Dequeue several objects from a ring.\n- *\n- * This function calls the multi-consumers or the single-consumer\n- * version, depending on the default behaviour that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @return\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_dequeue_bulk(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\tif (r->cons.sc_dequeue)\n-\t\treturn rte_ring_sc_dequeue_bulk(r, obj_table, n);\n-\telse\n-\t\treturn rte_ring_mc_dequeue_bulk(r, obj_table, n);\n-}\n-\n-/**\n- * Dequeue one object from a ring (multi-consumers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * consumer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue; no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_mc_dequeue(struct rte_ring *r, void **obj_p)\n-{\n-\treturn rte_ring_mc_dequeue_bulk(r, obj_p, 1);\n-}\n-\n-/**\n- * Dequeue one object from a ring (NOT multi-consumers safe).\n- *\n- * @param r\n- *   A 
pointer to the ring structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success; objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_sc_dequeue(struct rte_ring *r, void **obj_p)\n-{\n-\treturn rte_ring_sc_dequeue_bulk(r, obj_p, 1);\n-}\n-\n-/**\n- * Dequeue one object from a ring.\n- *\n- * This function calls the multi-consumers or the single-consumer\n- * version depending on the default behaviour that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_p\n- *   A pointer to a void * pointer (object) that will be filled.\n- * @return\n- *   - 0: Success, objects dequeued.\n- *   - -ENOENT: Not enough entries in the ring to dequeue, no object is\n- *     dequeued.\n- */\n-static inline int __attribute__((always_inline))\n-rte_ring_dequeue(struct rte_ring *r, void **obj_p)\n-{\n-\tif (r->cons.sc_dequeue)\n-\t\treturn rte_ring_sc_dequeue(r, obj_p);\n-\telse\n-\t\treturn rte_ring_mc_dequeue(r, obj_p);\n-}\n-\n-/**\n- * Test if a ring is full.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @return\n- *   - 1: The ring is full.\n- *   - 0: The ring is not full.\n- */\n-static inline int\n-rte_ring_full(const struct rte_ring *r)\n-{\n-\tuint32_t prod_tail = r->prod.tail;\n-\tuint32_t cons_tail = r->cons.tail;\n-\treturn (((cons_tail - prod_tail - 1) & r->prod.mask) == 0);\n-}\n-\n-/**\n- * Test if a ring is empty.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @return\n- *   - 1: The ring is empty.\n- *   - 0: The ring is not empty.\n- */\n-static inline int\n-rte_ring_empty(const struct rte_ring *r)\n-{\n-\tuint32_t prod_tail = r->prod.tail;\n-\tuint32_t cons_tail = r->cons.tail;\n-\treturn !!(cons_tail == prod_tail);\n-}\n-\n-/**\n- * Return the number of entries in a ring.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @return\n- *   The number of entries in the ring.\n- */\n-static inline unsigned\n-rte_ring_count(const struct rte_ring *r)\n-{\n-\tuint32_t prod_tail = r->prod.tail;\n-\tuint32_t cons_tail = r->cons.tail;\n-\treturn ((prod_tail - cons_tail) & r->prod.mask);\n-}\n-\n-/**\n- * Return the number of free entries in a ring.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @return\n- *   The number of free entries in the ring.\n- */\n-static inline unsigned\n-rte_ring_free_count(const struct rte_ring *r)\n-{\n-\tuint32_t prod_tail = r->prod.tail;\n-\tuint32_t cons_tail = r->cons.tail;\n-\treturn ((cons_tail - prod_tail - 1) & r->prod.mask);\n-}\n-\n-/**\n- * Dump the status of all rings on the console\n- *\n- * @param f\n- *   A pointer to a file for output\n- */\n-void rte_ring_list_dump(FILE *f);\n-\n-/**\n- * Search a ring from its name\n- *\n- * @param name\n- *   The name of the ring.\n- * @return\n- *   The pointer to the ring matching the name, or NULL if not found,\n- *   with rte_errno set appropriately. 
Possible rte_errno values include:\n- *    - ENOENT - required entry not available to return.\n- */\n-struct rte_ring *rte_ring_lookup(const char *name);\n-\n-/**\n- * Enqueue several objects on the ring (multi-producers safe).\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * producer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - n: Actual number of objects enqueued.\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_mp_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n)\n-{\n-\treturn __rte_ring_mp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n-}\n-\n-/**\n- * Enqueue several objects on a ring (NOT multi-producers safe).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - n: Actual number of objects enqueued.\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_sp_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n-\t\t\t unsigned n)\n-{\n-\treturn __rte_ring_sp_do_enqueue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n-}\n-\n-/**\n- * Enqueue several objects on a ring.\n- *\n- * This function calls the multi-producer or the single-producer\n- * version depending on the default behavior that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects).\n- * @param n\n- *   The number of objects to add in the ring from the obj_table.\n- * @return\n- *   - n: Actual number of objects enqueued.\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_enqueue_burst(struct rte_ring *r, void * const *obj_table,\n-\t\t      unsigned n)\n-{\n-\tif (r->prod.sp_enqueue)\n-\t\treturn rte_ring_sp_enqueue_burst(r, obj_table, n);\n-\telse\n-\t\treturn rte_ring_mp_enqueue_burst(r, obj_table, n);\n-}\n-\n-/**\n- * Dequeue several objects from a ring (multi-consumers safe). 
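Since the variable-size dequeue returns the number of objects actually obtained (0 on an empty ring), a consumer can drain in batches without polling rte_ring_count() first; a sketch with an illustrative batch size:

#include <rte_ring.h>

#define BATCH 32	/* illustrative burst size */

static void
drain(struct rte_ring *r)
{
	void *objs[BATCH];
	unsigned n, i;

	/* each call dequeues up to BATCH objects; 0 means empty */
	while ((n = rte_ring_dequeue_burst(r, objs, BATCH)) > 0) {
		for (i = 0; i < n; i++) {
			/* ... process objs[i] ... */
		}
	}
}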
When the request\n- * objects are more than the available objects, only dequeue the actual number\n- * of objects\n- *\n- * This function uses a \"compare and set\" instruction to move the\n- * consumer index atomically.\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @return\n- *   - n: Actual number of objects dequeued, 0 if ring is empty\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_mc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\treturn __rte_ring_mc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n-}\n-\n-/**\n- * Dequeue several objects from a ring (NOT multi-consumers safe).When the\n- * request objects are more than the available objects, only dequeue the\n- * actual number of objects\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @return\n- *   - n: Actual number of objects dequeued, 0 if ring is empty\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_sc_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\treturn __rte_ring_sc_do_dequeue(r, obj_table, n, RTE_RING_QUEUE_VARIABLE);\n-}\n-\n-/**\n- * Dequeue multiple objects from a ring up to a maximum number.\n- *\n- * This function calls the multi-consumers or the single-consumer\n- * version, depending on the default behaviour that was specified at\n- * ring creation time (see flags).\n- *\n- * @param r\n- *   A pointer to the ring structure.\n- * @param obj_table\n- *   A pointer to a table of void * pointers (objects) that will be filled.\n- * @param n\n- *   The number of objects to dequeue from the ring to the obj_table.\n- * @return\n- *   - Number of objects dequeued\n- */\n-static inline unsigned __attribute__((always_inline))\n-rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table, unsigned n)\n-{\n-\tif (r->cons.sc_dequeue)\n-\t\treturn rte_ring_sc_dequeue_burst(r, obj_table, n);\n-\telse\n-\t\treturn rte_ring_mc_dequeue_burst(r, obj_table, n);\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* _RTE_RING_H_ */\n",
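Putting the moved API through a minimal single-producer/single-consumer round trip (all names illustrative; rte_eal_init() is assumed to have run) might look like this:

#include <stdio.h>
#include <rte_ring.h>
#include <rte_errno.h>

static int
ring_round_trip(void)
{
	int payload = 42;			/* illustrative object */
	void *out = NULL;
	struct rte_ring *r;

	r = rte_ring_create("demo", 64, SOCKET_ID_ANY,
			    RING_F_SP_ENQ | RING_F_SC_DEQ);
	if (r == NULL) {
		fprintf(stderr, "create failed: rte_errno=%d\n", rte_errno);
		return -1;
	}

	if (rte_ring_enqueue(r, &payload) != 0)	/* -ENOBUFS if full */
		return -1;

	printf("in flight: %u\n", rte_ring_count(r));	/* prints 1 */

	if (rte_ring_dequeue(r, &out) != 0)	/* -ENOENT if empty */
		return -1;

	return (out == &payload) ? 0 : -1;
}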
    "prefixes": [
        "dpdk-dev",
        "RFC",
        "07/13"
    ]
}