get:
Show a patch.

patch:
Partially update a patch (only the supplied fields are changed).

put:
Update a patch (the full resource is replaced).
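
A minimal sketch of fetching this resource with Python's requests package
(an illustration, not an official Patchwork client; it assumes requests is
installed and reuses the URL from the example response below):

    import requests

    # Fetch the patch shown below as JSON rather than the browsable page.
    url = "https://patches.dpdk.org/api/patches/26326/"
    resp = requests.get(url, params={"format": "json"}, timeout=10)
    resp.raise_for_status()

    patch = resp.json()
    print(patch["name"])   # patch subject
    print(patch["state"])  # e.g. "superseded"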

GET /api/patches/26326/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 26326,
    "url": "https://patches.dpdk.org/api/patches/26326/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1499156063-263699-2-git-send-email-david.hunt@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1499156063-263699-2-git-send-email-david.hunt@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1499156063-263699-2-git-send-email-david.hunt@intel.com",
    "date": "2017-07-04T08:14:21",
    "name": "[dpdk-dev,v6,1/3] examples/eventdev_pipeline: added sample app",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "5e7f962ca7f6004189ca85bb54a373bfdb5cad4a",
    "submitter": {
        "id": 342,
        "url": "https://patches.dpdk.org/api/people/342/?format=api",
        "name": "Hunt, David",
        "email": "david.hunt@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1499156063-263699-2-git-send-email-david.hunt@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/26326/comments/",
    "check": "success",
    "checks": "https://patches.dpdk.org/api/patches/26326/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 66BE4567E;\n\tTue,  4 Jul 2017 10:15:50 +0200 (CEST)",
            "from mga09.intel.com (mga09.intel.com [134.134.136.24])\n\tby dpdk.org (Postfix) with ESMTP id 048B420F\n\tfor <dev@dpdk.org>; Tue,  4 Jul 2017 10:15:44 +0200 (CEST)",
            "from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby orsmga102.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t04 Jul 2017 01:15:43 -0700",
            "from silpixa00397898.ir.intel.com ([10.237.223.116])\n\tby fmsmga001.fm.intel.com with ESMTP; 04 Jul 2017 01:14:27 -0700"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos; i=\"5.40,307,1496127600\"; d=\"scan'208\";\n\ta=\"1167853350\"",
        "From": "David Hunt <david.hunt@intel.com>",
        "To": "dev@dpdk.org",
        "Cc": "jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com,\n\tGage Eads <gage.eads@intel.com>,\n\tBruce Richardson <bruce.richardson@intel.com>,\n\tDavid Hunt <david.hunt@intel.com>",
        "Date": "Tue,  4 Jul 2017 09:14:21 +0100",
        "Message-Id": "<1499156063-263699-2-git-send-email-david.hunt@intel.com>",
        "X-Mailer": "git-send-email 2.7.4",
        "In-Reply-To": "<1499156063-263699-1-git-send-email-david.hunt@intel.com>",
        "References": "<1498830673-56759-2-git-send-email-david.hunt@intel.com>\n\t<1499156063-263699-1-git-send-email-david.hunt@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v6 1/3] examples/eventdev_pipeline: added sample\n\tapp",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Harry van Haaren <harry.van.haaren@intel.com>\n\nThis commit adds a sample app for the eventdev library.\nThe app has been tested with DPDK 17.05-rc2, hence this\nrelease (or later) is recommended.\n\nThe sample app showcases a pipeline processing use-case,\nwith event scheduling and processing defined per stage.\nThe application receives traffic as normal, with each\npacket traversing the pipeline. Once the packet has\nbeen processed by each of the pipeline stages, it is\ntransmitted again.\n\nThe app provides a framework to utilize cores for a single\nrole or multiple roles. Examples of roles are the RX core,\nTX core, Scheduling core (in the case of the event/sw PMD),\nand worker cores.\n\nVarious flags are available to configure numbers of stages,\ncycles of work at each stage, type of scheduling, number of\nworker cores, queue depths etc. For a full explaination,\nplease refer to the documentation.\n\nSigned-off-by: Gage Eads <gage.eads@intel.com>\nSigned-off-by: Bruce Richardson <bruce.richardson@intel.com>\nSigned-off-by: Harry van Haaren <harry.van.haaren@intel.com>\nSigned-off-by: David Hunt <david.hunt@intel.com>\nAcked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>\n---\n MAINTAINERS                                |    1 +\n examples/Makefile                          |    2 +\n examples/eventdev_pipeline_sw_pmd/Makefile |   49 ++\n examples/eventdev_pipeline_sw_pmd/main.c   | 1005 ++++++++++++++++++++++++++++\n 4 files changed, 1057 insertions(+)\n create mode 100644 examples/eventdev_pipeline_sw_pmd/Makefile\n create mode 100644 examples/eventdev_pipeline_sw_pmd/main.c",
    "diff": "diff --git a/MAINTAINERS b/MAINTAINERS\nindex d9dbf8f..860621a 100644\n--- a/MAINTAINERS\n+++ b/MAINTAINERS\n@@ -579,6 +579,7 @@ M: Harry van Haaren <harry.van.haaren@intel.com>\n F: drivers/event/sw/\n F: test/test/test_eventdev_sw.c\n F: doc/guides/eventdevs/sw.rst\n+F: examples/eventdev_pipeline_sw_pmd/\n \n NXP DPAA2 Eventdev PMD\n M: Hemant Agrawal <hemant.agrawal@nxp.com>\ndiff --git a/examples/Makefile b/examples/Makefile\nindex c0e9c3b..97f12ad 100644\n--- a/examples/Makefile\n+++ b/examples/Makefile\n@@ -100,4 +100,6 @@ $(info vm_power_manager requires libvirt >= 0.9.3)\n endif\n endif\n \n+DIRS-y += eventdev_pipeline_sw_pmd\n+\n include $(RTE_SDK)/mk/rte.extsubdir.mk\ndiff --git a/examples/eventdev_pipeline_sw_pmd/Makefile b/examples/eventdev_pipeline_sw_pmd/Makefile\nnew file mode 100644\nindex 0000000..de4e22c\n--- /dev/null\n+++ b/examples/eventdev_pipeline_sw_pmd/Makefile\n@@ -0,0 +1,49 @@\n+#   BSD LICENSE\n+#\n+#   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.\n+#\n+#   Redistribution and use in source and binary forms, with or without\n+#   modification, are permitted provided that the following conditions\n+#   are met:\n+#\n+#     * Redistributions of source code must retain the above copyright\n+#       notice, this list of conditions and the following disclaimer.\n+#     * Redistributions in binary form must reproduce the above copyright\n+#       notice, this list of conditions and the following disclaimer in\n+#       the documentation and/or other materials provided with the\n+#       distribution.\n+#     * Neither the name of Intel Corporation nor the names of its\n+#       contributors may be used to endorse or promote products derived\n+#       from this software without specific prior written permission.\n+#\n+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+ifeq ($(RTE_SDK),)\n+$(error \"Please define RTE_SDK environment variable\")\n+endif\n+\n+# Default target, can be overridden by command line or environment\n+RTE_TARGET ?= x86_64-native-linuxapp-gcc\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+# binary name\n+APP = eventdev_pipeline_sw_pmd\n+\n+# all source are stored in SRCS-y\n+SRCS-y := main.c\n+\n+CFLAGS += -O3\n+CFLAGS += $(WERROR_FLAGS)\n+\n+include $(RTE_SDK)/mk/rte.extapp.mk\ndiff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c\nnew file mode 100644\nindex 0000000..c62cba2\n--- /dev/null\n+++ b/examples/eventdev_pipeline_sw_pmd/main.c\n@@ -0,0 +1,1005 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2016-2017 Intel Corporation. 
All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <getopt.h>\n+#include <stdint.h>\n+#include <stdio.h>\n+#include <signal.h>\n+#include <sched.h>\n+#include <stdbool.h>\n+\n+#include <rte_eal.h>\n+#include <rte_mempool.h>\n+#include <rte_mbuf.h>\n+#include <rte_launch.h>\n+#include <rte_malloc.h>\n+#include <rte_random.h>\n+#include <rte_cycles.h>\n+#include <rte_ethdev.h>\n+#include <rte_eventdev.h>\n+\n+#define MAX_NUM_STAGES 8\n+#define BATCH_SIZE 16\n+#define MAX_NUM_CORE 64\n+\n+static unsigned int active_cores;\n+static unsigned int num_workers;\n+static long num_packets = (1L << 25); /* do ~32M packets */\n+static unsigned int num_fids = 512;\n+static unsigned int num_stages = 1;\n+static unsigned int worker_cq_depth = 16;\n+static int queue_type = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;\n+static int16_t next_qid[MAX_NUM_STAGES+1] = {-1};\n+static int16_t qid[MAX_NUM_STAGES] = {-1};\n+static int worker_cycles;\n+static int enable_queue_priorities;\n+\n+struct prod_data {\n+\tuint8_t dev_id;\n+\tuint8_t port_id;\n+\tint32_t qid;\n+\tunsigned int num_nic_ports;\n+} __rte_cache_aligned;\n+\n+struct cons_data {\n+\tuint8_t dev_id;\n+\tuint8_t port_id;\n+} __rte_cache_aligned;\n+\n+static struct prod_data prod_data;\n+static struct cons_data cons_data;\n+\n+struct worker_data {\n+\tuint8_t dev_id;\n+\tuint8_t port_id;\n+} __rte_cache_aligned;\n+\n+struct fastpath_data {\n+\tvolatile int done;\n+\tuint32_t rx_lock;\n+\tuint32_t tx_lock;\n+\tuint32_t sched_lock;\n+\tbool rx_single;\n+\tbool tx_single;\n+\tbool sched_single;\n+\tunsigned int rx_core[MAX_NUM_CORE];\n+\tunsigned int tx_core[MAX_NUM_CORE];\n+\tunsigned int sched_core[MAX_NUM_CORE];\n+\tunsigned int worker_core[MAX_NUM_CORE];\n+\tstruct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];\n+};\n+\n+static struct fastpath_data *fdata;\n+\n+struct config_data {\n+\tint quiet;\n+\tint dump_dev;\n+\tint dump_dev_signal;\n+};\n+\n+static struct config_data 
cdata;\n+\n+static bool\n+core_in_use(unsigned int lcore_id) {\n+\treturn (fdata->rx_core[lcore_id] || fdata->sched_core[lcore_id] ||\n+\t\tfdata->tx_core[lcore_id] || fdata->worker_core[lcore_id]);\n+}\n+\n+\n+static void\n+eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,\n+\t\t\tvoid *userdata)\n+{\n+\tint port_id = (uintptr_t) userdata;\n+\tunsigned int _sent = 0;\n+\n+\tdo {\n+\t\t/* Note: hard-coded TX queue */\n+\t\t_sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],\n+\t\t\t\t\t  unsent - _sent);\n+\t} while (_sent != unsent);\n+}\n+\n+static int\n+consumer(void)\n+{\n+\tconst uint64_t freq_khz = rte_get_timer_hz() / 1000;\n+\tstruct rte_event packets[BATCH_SIZE];\n+\n+\tstatic uint64_t received;\n+\tstatic uint64_t last_pkts;\n+\tstatic uint64_t last_time;\n+\tstatic uint64_t start_time;\n+\tunsigned int i, j;\n+\tuint8_t dev_id = cons_data.dev_id;\n+\tuint8_t port_id = cons_data.port_id;\n+\n+\tuint16_t n = rte_event_dequeue_burst(dev_id, port_id,\n+\t\t\tpackets, RTE_DIM(packets), 0);\n+\n+\tif (n == 0) {\n+\t\tfor (j = 0; j < rte_eth_dev_count(); j++)\n+\t\t\trte_eth_tx_buffer_flush(j, 0, fdata->tx_buf[j]);\n+\t\treturn 0;\n+\t}\n+\tif (start_time == 0)\n+\t\tlast_time = start_time = rte_get_timer_cycles();\n+\n+\treceived += n;\n+\tfor (i = 0; i < n; i++) {\n+\t\tuint8_t outport = packets[i].mbuf->port;\n+\t\trte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],\n+\t\t\t\tpackets[i].mbuf);\n+\t}\n+\n+\t/* Print out mpps every 1<22 packets */\n+\tif (!cdata.quiet && received >= last_pkts + (1<<22)) {\n+\t\tconst uint64_t now = rte_get_timer_cycles();\n+\t\tconst uint64_t total_ms = (now - start_time) / freq_khz;\n+\t\tconst uint64_t delta_ms = (now - last_time) / freq_khz;\n+\t\tuint64_t delta_pkts = received - last_pkts;\n+\n+\t\tprintf(\"# consumer RX=%\"PRIu64\", time %\"PRIu64 \"ms, \"\n+\t\t\t\"avg %.3f mpps [current %.3f mpps]\\n\",\n+\t\t\t\treceived,\n+\t\t\t\ttotal_ms,\n+\t\t\t\treceived / (total_ms * 1000.0),\n+\t\t\t\tdelta_pkts / (delta_ms * 1000.0));\n+\t\tlast_pkts = received;\n+\t\tlast_time = now;\n+\t}\n+\n+\tnum_packets -= n;\n+\tif (num_packets <= 0)\n+\t\tfdata->done = 1;\n+\n+\treturn 0;\n+}\n+\n+static int\n+producer(void)\n+{\n+\tstatic uint8_t eth_port;\n+\tstruct rte_mbuf *mbufs[BATCH_SIZE+2];\n+\tstruct rte_event ev[BATCH_SIZE+2];\n+\tuint32_t i, num_ports = prod_data.num_nic_ports;\n+\tint32_t qid = prod_data.qid;\n+\tuint8_t dev_id = prod_data.dev_id;\n+\tuint8_t port_id = prod_data.port_id;\n+\tuint32_t prio_idx = 0;\n+\n+\tconst uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);\n+\tif (++eth_port == num_ports)\n+\t\teth_port = 0;\n+\tif (nb_rx == 0) {\n+\t\trte_pause();\n+\t\treturn 0;\n+\t}\n+\n+\tfor (i = 0; i < nb_rx; i++) {\n+\t\tev[i].flow_id = mbufs[i]->hash.rss;\n+\t\tev[i].op = RTE_EVENT_OP_NEW;\n+\t\tev[i].sched_type = queue_type;\n+\t\tev[i].queue_id = qid;\n+\t\tev[i].event_type = RTE_EVENT_TYPE_ETHDEV;\n+\t\tev[i].sub_event_type = 0;\n+\t\tev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;\n+\t\tev[i].mbuf = mbufs[i];\n+\t\tRTE_SET_USED(prio_idx);\n+\t}\n+\n+\tconst int nb_tx = rte_event_enqueue_burst(dev_id, port_id, ev, nb_rx);\n+\tif (nb_tx != nb_rx) {\n+\t\tfor (i = nb_tx; i < nb_rx; i++)\n+\t\t\trte_pktmbuf_free(mbufs[i]);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static inline void\n+schedule_devices(uint8_t dev_id, unsigned int lcore_id)\n+{\n+\tif (fdata->rx_core[lcore_id] && (fdata->rx_single ||\n+\t    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {\n+\t\tproducer();\n+\t\trte_atomic32_clear((rte_atomic32_t 
*)&(fdata->rx_lock));\n+\t}\n+\n+\tif (fdata->sched_core[lcore_id] && (fdata->sched_single ||\n+\t    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {\n+\t\trte_event_schedule(dev_id);\n+\t\tif (cdata.dump_dev_signal) {\n+\t\t\trte_event_dev_dump(0, stdout);\n+\t\t\tcdata.dump_dev_signal = 0;\n+\t\t}\n+\t\trte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));\n+\t}\n+\n+\tif (fdata->tx_core[lcore_id] && (fdata->tx_single ||\n+\t    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {\n+\t\tconsumer();\n+\t\trte_atomic32_clear((rte_atomic32_t *)&(fdata->tx_lock));\n+\t}\n+}\n+\n+\n+\n+static inline void\n+work(struct rte_mbuf *m)\n+{\n+\tstruct ether_hdr *eth;\n+\tstruct ether_addr addr;\n+\n+\t/* change mac addresses on packet (to use mbuf data) */\n+\teth = rte_pktmbuf_mtod(m, struct ether_hdr *);\n+\tether_addr_copy(&eth->d_addr, &addr);\n+\tether_addr_copy(&eth->s_addr, &eth->d_addr);\n+\tether_addr_copy(&addr, &eth->s_addr);\n+\n+\t/* do a number of cycles of work per packet */\n+\tvolatile uint64_t start_tsc = rte_rdtsc();\n+\twhile (rte_rdtsc() < start_tsc + worker_cycles)\n+\t\trte_pause();\n+}\n+\n+static int\n+worker(void *arg)\n+{\n+\tstruct rte_event events[BATCH_SIZE];\n+\n+\tstruct worker_data *data = (struct worker_data *)arg;\n+\tuint8_t dev_id = data->dev_id;\n+\tuint8_t port_id = data->port_id;\n+\tsize_t sent = 0, received = 0;\n+\tunsigned int lcore_id = rte_lcore_id();\n+\n+\twhile (!fdata->done) {\n+\t\tuint16_t i;\n+\n+\t\tschedule_devices(dev_id, lcore_id);\n+\n+\t\tif (!fdata->worker_core[lcore_id]) {\n+\t\t\trte_pause();\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tconst uint16_t nb_rx = rte_event_dequeue_burst(dev_id, port_id,\n+\t\t\t\tevents, RTE_DIM(events), 0);\n+\n+\t\tif (nb_rx == 0) {\n+\t\t\trte_pause();\n+\t\t\tcontinue;\n+\t\t}\n+\t\treceived += nb_rx;\n+\n+\t\tfor (i = 0; i < nb_rx; i++) {\n+\n+\t\t\t/* The first worker stage does classification */\n+\t\t\tif (events[i].queue_id == qid[0])\n+\t\t\t\tevents[i].flow_id = events[i].mbuf->hash.rss\n+\t\t\t\t\t\t\t% num_fids;\n+\n+\t\t\tevents[i].queue_id = next_qid[events[i].queue_id];\n+\t\t\tevents[i].op = RTE_EVENT_OP_FORWARD;\n+\t\t\tevents[i].sched_type = queue_type;\n+\n+\t\t\twork(events[i].mbuf);\n+\t\t}\n+\t\tuint16_t nb_tx = rte_event_enqueue_burst(dev_id, port_id,\n+\t\t\t\tevents, nb_rx);\n+\t\twhile (nb_tx < nb_rx && !fdata->done)\n+\t\t\tnb_tx += rte_event_enqueue_burst(dev_id, port_id,\n+\t\t\t\t\t\t\tevents + nb_tx,\n+\t\t\t\t\t\t\tnb_rx - nb_tx);\n+\t\tsent += nb_tx;\n+\t}\n+\n+\tif (!cdata.quiet)\n+\t\tprintf(\"  worker %u thread done. 
RX=%zu TX=%zu\\n\",\n+\t\t\t\trte_lcore_id(), received, sent);\n+\n+\treturn 0;\n+}\n+\n+/*\n+ * Parse the coremask given as argument (hexadecimal string) and fill\n+ * the global configuration (core role and core count) with the parsed\n+ * value.\n+ */\n+static int xdigit2val(unsigned char c)\n+{\n+\tint val;\n+\n+\tif (isdigit(c))\n+\t\tval = c - '0';\n+\telse if (isupper(c))\n+\t\tval = c - 'A' + 10;\n+\telse\n+\t\tval = c - 'a' + 10;\n+\treturn val;\n+}\n+\n+static uint64_t\n+parse_coremask(const char *coremask)\n+{\n+\tint i, j, idx = 0;\n+\tunsigned int count = 0;\n+\tchar c;\n+\tint val;\n+\tuint64_t mask = 0;\n+\tconst int32_t BITS_HEX = 4;\n+\n+\tif (coremask == NULL)\n+\t\treturn -1;\n+\t/* Remove all blank characters ahead and after .\n+\t * Remove 0x/0X if exists.\n+\t */\n+\twhile (isblank(*coremask))\n+\t\tcoremask++;\n+\tif (coremask[0] == '0' && ((coremask[1] == 'x')\n+\t\t|| (coremask[1] == 'X')))\n+\t\tcoremask += 2;\n+\ti = strlen(coremask);\n+\twhile ((i > 0) && isblank(coremask[i - 1]))\n+\t\ti--;\n+\tif (i == 0)\n+\t\treturn -1;\n+\n+\tfor (i = i - 1; i >= 0 && idx < MAX_NUM_CORE; i--) {\n+\t\tc = coremask[i];\n+\t\tif (isxdigit(c) == 0) {\n+\t\t\t/* invalid characters */\n+\t\t\treturn -1;\n+\t\t}\n+\t\tval = xdigit2val(c);\n+\t\tfor (j = 0; j < BITS_HEX && idx < MAX_NUM_CORE; j++, idx++) {\n+\t\t\tif ((1 << j) & val) {\n+\t\t\t\tmask |= (1UL << idx);\n+\t\t\t\tcount++;\n+\t\t\t}\n+\t\t}\n+\t}\n+\tfor (; i >= 0; i--)\n+\t\tif (coremask[i] != '0')\n+\t\t\treturn -1;\n+\tif (count == 0)\n+\t\treturn -1;\n+\treturn mask;\n+}\n+\n+static struct option long_options[] = {\n+\t{\"workers\", required_argument, 0, 'w'},\n+\t{\"packets\", required_argument, 0, 'n'},\n+\t{\"atomic-flows\", required_argument, 0, 'f'},\n+\t{\"num_stages\", required_argument, 0, 's'},\n+\t{\"rx-mask\", required_argument, 0, 'r'},\n+\t{\"tx-mask\", required_argument, 0, 't'},\n+\t{\"sched-mask\", required_argument, 0, 'e'},\n+\t{\"cq-depth\", required_argument, 0, 'c'},\n+\t{\"work-cycles\", required_argument, 0, 'W'},\n+\t{\"queue-priority\", no_argument, 0, 'P'},\n+\t{\"parallel\", no_argument, 0, 'p'},\n+\t{\"ordered\", no_argument, 0, 'o'},\n+\t{\"quiet\", no_argument, 0, 'q'},\n+\t{\"dump\", no_argument, 0, 'D'},\n+\t{0, 0, 0, 0}\n+};\n+\n+static void\n+usage(void)\n+{\n+\tconst char *usage_str =\n+\t\t\"  Usage: eventdev_demo [options]\\n\"\n+\t\t\"  Options:\\n\"\n+\t\t\"  -n, --packets=N              Send N packets (default ~32M), 0 implies no limit\\n\"\n+\t\t\"  -f, --atomic-flows=N         Use N random flows from 1 to N (default 16)\\n\"\n+\t\t\"  -s, --num_stages=N           Use N atomic stages (default 1)\\n\"\n+\t\t\"  -r, --rx-mask=core mask      Run NIC rx on CPUs in core mask\\n\"\n+\t\t\"  -w, --worker-mask=core mask  Run worker on CPUs in core mask\\n\"\n+\t\t\"  -t, --tx-mask=core mask      Run NIC tx on CPUs in core mask\\n\"\n+\t\t\"  -e  --sched-mask=core mask   Run scheduler on CPUs in core mask\\n\"\n+\t\t\"  -c  --cq-depth=N             Worker CQ depth (default 16)\\n\"\n+\t\t\"  -W  --work-cycles=N          Worker cycles (default 0)\\n\"\n+\t\t\"  -P  --queue-priority         Enable scheduler queue prioritization\\n\"\n+\t\t\"  -o, --ordered                Use ordered scheduling\\n\"\n+\t\t\"  -p, --parallel               Use parallel scheduling\\n\"\n+\t\t\"  -q, --quiet                  Minimize printed output\\n\"\n+\t\t\"  -D, --dump                   Print detailed statistics before exit\"\n+\t\t\"\\n\";\n+\tfprintf(stderr, \"%s\", 
usage_str);\n+\texit(1);\n+}\n+\n+static void\n+parse_app_args(int argc, char **argv)\n+{\n+\t/* Parse cli options*/\n+\tint option_index;\n+\tint c;\n+\topterr = 0;\n+\tuint64_t rx_lcore_mask = 0;\n+\tuint64_t tx_lcore_mask = 0;\n+\tuint64_t sched_lcore_mask = 0;\n+\tuint64_t worker_lcore_mask = 0;\n+\tint i;\n+\n+\tfor (;;) {\n+\t\tc = getopt_long(argc, argv, \"r:t:e:c:w:n:f:s:poPqDW:\",\n+\t\t\t\tlong_options, &option_index);\n+\t\tif (c == -1)\n+\t\t\tbreak;\n+\n+\t\tint popcnt = 0;\n+\t\tswitch (c) {\n+\t\tcase 'n':\n+\t\t\tnum_packets = (unsigned long)atol(optarg);\n+\t\t\tbreak;\n+\t\tcase 'f':\n+\t\t\tnum_fids = (unsigned int)atoi(optarg);\n+\t\t\tbreak;\n+\t\tcase 's':\n+\t\t\tnum_stages = (unsigned int)atoi(optarg);\n+\t\t\tbreak;\n+\t\tcase 'c':\n+\t\t\tworker_cq_depth = (unsigned int)atoi(optarg);\n+\t\t\tbreak;\n+\t\tcase 'W':\n+\t\t\tworker_cycles = (unsigned int)atoi(optarg);\n+\t\t\tbreak;\n+\t\tcase 'P':\n+\t\t\tenable_queue_priorities = 1;\n+\t\t\tbreak;\n+\t\tcase 'o':\n+\t\t\tqueue_type = RTE_EVENT_QUEUE_CFG_ORDERED_ONLY;\n+\t\t\tbreak;\n+\t\tcase 'p':\n+\t\t\tqueue_type = RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY;\n+\t\t\tbreak;\n+\t\tcase 'q':\n+\t\t\tcdata.quiet = 1;\n+\t\t\tbreak;\n+\t\tcase 'D':\n+\t\t\tcdata.dump_dev = 1;\n+\t\t\tbreak;\n+\t\tcase 'w':\n+\t\t\tworker_lcore_mask = parse_coremask(optarg);\n+\t\t\tbreak;\n+\t\tcase 'r':\n+\t\t\trx_lcore_mask = parse_coremask(optarg);\n+\t\t\tpopcnt = __builtin_popcountll(rx_lcore_mask);\n+\t\t\tfdata->rx_single = (popcnt == 1);\n+\t\t\tbreak;\n+\t\tcase 't':\n+\t\t\ttx_lcore_mask = parse_coremask(optarg);\n+\t\t\tpopcnt = __builtin_popcountll(tx_lcore_mask);\n+\t\t\tfdata->tx_single = (popcnt == 1);\n+\t\t\tbreak;\n+\t\tcase 'e':\n+\t\t\tsched_lcore_mask = parse_coremask(optarg);\n+\t\t\tpopcnt = __builtin_popcountll(sched_lcore_mask);\n+\t\t\tfdata->sched_single = (popcnt == 1);\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tusage();\n+\t\t}\n+\t}\n+\n+\tif (worker_lcore_mask == 0 || rx_lcore_mask == 0 ||\n+\t    sched_lcore_mask == 0 || tx_lcore_mask == 0) {\n+\t\tprintf(\"Core part of pipeline was not assigned any cores. 
\"\n+\t\t\t\"This will stall the pipeline, please check core masks \"\n+\t\t\t\"(use -h for details on setting core masks):\\n\"\n+\t\t\t\"\\trx: %\"PRIu64\"\\n\\ttx: %\"PRIu64\"\\n\\tsched: %\"PRIu64\n+\t\t\t\"\\n\\tworkers: %\"PRIu64\"\\n\",\n+\t\t\trx_lcore_mask, tx_lcore_mask, sched_lcore_mask,\n+\t\t\tworker_lcore_mask);\n+\t\trte_exit(-1, \"Fix core masks\\n\");\n+\t}\n+\tif (num_stages == 0 || num_stages > MAX_NUM_STAGES)\n+\t\tusage();\n+\n+\tfor (i = 0; i < MAX_NUM_CORE; i++) {\n+\t\tfdata->rx_core[i] = !!(rx_lcore_mask & (1UL << i));\n+\t\tfdata->tx_core[i] = !!(tx_lcore_mask & (1UL << i));\n+\t\tfdata->sched_core[i] = !!(sched_lcore_mask & (1UL << i));\n+\t\tfdata->worker_core[i] = !!(worker_lcore_mask & (1UL << i));\n+\n+\t\tif (fdata->worker_core[i])\n+\t\t\tnum_workers++;\n+\t\tif (core_in_use(i))\n+\t\t\tactive_cores++;\n+\t}\n+}\n+\n+/*\n+ * Initializes a given port using global settings and with the RX buffers\n+ * coming from the mbuf_pool passed as a parameter.\n+ */\n+static inline int\n+port_init(uint8_t port, struct rte_mempool *mbuf_pool)\n+{\n+\tstatic const struct rte_eth_conf port_conf_default = {\n+\t\t.rxmode = {\n+\t\t\t.mq_mode = ETH_MQ_RX_RSS,\n+\t\t\t.max_rx_pkt_len = ETHER_MAX_LEN\n+\t\t},\n+\t\t.rx_adv_conf = {\n+\t\t\t.rss_conf = {\n+\t\t\t\t.rss_hf = ETH_RSS_IP |\n+\t\t\t\t\t  ETH_RSS_TCP |\n+\t\t\t\t\t  ETH_RSS_UDP,\n+\t\t\t}\n+\t\t}\n+\t};\n+\tconst uint16_t rx_rings = 1, tx_rings = 1;\n+\tconst uint16_t rx_ring_size = 512, tx_ring_size = 512;\n+\tstruct rte_eth_conf port_conf = port_conf_default;\n+\tint retval;\n+\tuint16_t q;\n+\n+\tif (port >= rte_eth_dev_count())\n+\t\treturn -1;\n+\n+\t/* Configure the Ethernet device. */\n+\tretval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);\n+\tif (retval != 0)\n+\t\treturn retval;\n+\n+\t/* Allocate and set up 1 RX queue per Ethernet port. */\n+\tfor (q = 0; q < rx_rings; q++) {\n+\t\tretval = rte_eth_rx_queue_setup(port, q, rx_ring_size,\n+\t\t\t\trte_eth_dev_socket_id(port), NULL, mbuf_pool);\n+\t\tif (retval < 0)\n+\t\t\treturn retval;\n+\t}\n+\n+\t/* Allocate and set up 1 TX queue per Ethernet port. */\n+\tfor (q = 0; q < tx_rings; q++) {\n+\t\tretval = rte_eth_tx_queue_setup(port, q, tx_ring_size,\n+\t\t\t\trte_eth_dev_socket_id(port), NULL);\n+\t\tif (retval < 0)\n+\t\t\treturn retval;\n+\t}\n+\n+\t/* Start the Ethernet port. */\n+\tretval = rte_eth_dev_start(port);\n+\tif (retval < 0)\n+\t\treturn retval;\n+\n+\t/* Display the port MAC address. */\n+\tstruct ether_addr addr;\n+\trte_eth_macaddr_get(port, &addr);\n+\tprintf(\"Port %u MAC: %02\" PRIx8 \" %02\" PRIx8 \" %02\" PRIx8\n+\t\t\t   \" %02\" PRIx8 \" %02\" PRIx8 \" %02\" PRIx8 \"\\n\",\n+\t\t\t(unsigned int)port,\n+\t\t\taddr.addr_bytes[0], addr.addr_bytes[1],\n+\t\t\taddr.addr_bytes[2], addr.addr_bytes[3],\n+\t\t\taddr.addr_bytes[4], addr.addr_bytes[5]);\n+\n+\t/* Enable RX in promiscuous mode for the Ethernet device. 
*/\n+\trte_eth_promiscuous_enable(port);\n+\n+\treturn 0;\n+}\n+\n+static int\n+init_ports(unsigned int num_ports)\n+{\n+\tuint8_t portid;\n+\tunsigned int i;\n+\n+\tstruct rte_mempool *mp = rte_pktmbuf_pool_create(\"packet_pool\",\n+\t\t\t/* mbufs */ 16384 * num_ports,\n+\t\t\t/* cache_size */ 512,\n+\t\t\t/* priv_size*/ 0,\n+\t\t\t/* data_room_size */ RTE_MBUF_DEFAULT_BUF_SIZE,\n+\t\t\trte_socket_id());\n+\n+\tfor (portid = 0; portid < num_ports; portid++)\n+\t\tif (port_init(portid, mp) != 0)\n+\t\t\trte_exit(EXIT_FAILURE, \"Cannot init port %\"PRIu8 \"\\n\",\n+\t\t\t\t\tportid);\n+\n+\tfor (i = 0; i < num_ports; i++) {\n+\t\tvoid *userdata = (void *)(uintptr_t) i;\n+\t\tfdata->tx_buf[i] =\n+\t\t\trte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);\n+\t\tif (fdata->tx_buf[i] == NULL)\n+\t\t\trte_panic(\"Out of memory\\n\");\n+\t\trte_eth_tx_buffer_init(fdata->tx_buf[i], 32);\n+\t\trte_eth_tx_buffer_set_err_callback(fdata->tx_buf[i],\n+\t\t\t\t\t\t   eth_tx_buffer_retry,\n+\t\t\t\t\t\t   userdata);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+struct port_link {\n+\tuint8_t queue_id;\n+\tuint8_t priority;\n+};\n+\n+static int\n+setup_eventdev(struct prod_data *prod_data,\n+\t\tstruct cons_data *cons_data,\n+\t\tstruct worker_data *worker_data)\n+{\n+\tconst uint8_t dev_id = 0;\n+\t/* +1 stages is for a SINGLE_LINK TX stage */\n+\tconst uint8_t nb_queues = num_stages + 1;\n+\t/* + 2 is one port for producer and one for consumer */\n+\tconst uint8_t nb_ports = num_workers + 2;\n+\tstruct rte_event_dev_config config = {\n+\t\t\t.nb_event_queues = nb_queues,\n+\t\t\t.nb_event_ports = nb_ports,\n+\t\t\t.nb_events_limit  = 4096,\n+\t\t\t.nb_event_queue_flows = 1024,\n+\t\t\t.nb_event_port_dequeue_depth = 128,\n+\t\t\t.nb_event_port_enqueue_depth = 128,\n+\t};\n+\tstruct rte_event_port_conf wkr_p_conf = {\n+\t\t\t.dequeue_depth = worker_cq_depth,\n+\t\t\t.enqueue_depth = 64,\n+\t\t\t.new_event_threshold = 4096,\n+\t};\n+\tstruct rte_event_queue_conf wkr_q_conf = {\n+\t\t\t.event_queue_cfg = queue_type,\n+\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,\n+\t\t\t.nb_atomic_flows = 1024,\n+\t\t\t.nb_atomic_order_sequences = 1024,\n+\t};\n+\tstruct rte_event_port_conf tx_p_conf = {\n+\t\t\t.dequeue_depth = 128,\n+\t\t\t.enqueue_depth = 128,\n+\t\t\t.new_event_threshold = 4096,\n+\t};\n+\tconst struct rte_event_queue_conf tx_q_conf = {\n+\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,\n+\t\t\t.event_queue_cfg =\n+\t\t\t\t\tRTE_EVENT_QUEUE_CFG_ATOMIC_ONLY |\n+\t\t\t\t\tRTE_EVENT_QUEUE_CFG_SINGLE_LINK,\n+\t\t\t.nb_atomic_flows = 1024,\n+\t\t\t.nb_atomic_order_sequences = 1024,\n+\t};\n+\n+\tstruct port_link worker_queues[MAX_NUM_STAGES];\n+\tstruct port_link tx_queue;\n+\tunsigned int i;\n+\n+\tint ret, ndev = rte_event_dev_count();\n+\tif (ndev < 1) {\n+\t\tprintf(\"%d: No Eventdev Devices Found\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tstruct rte_event_dev_info dev_info;\n+\tret = rte_event_dev_info_get(dev_id, &dev_info);\n+\tprintf(\"\\tEventdev %d: %s\\n\", dev_id, dev_info.driver_name);\n+\n+\tif (dev_info.max_event_port_dequeue_depth <\n+\t\t\tconfig.nb_event_port_dequeue_depth)\n+\t\tconfig.nb_event_port_dequeue_depth =\n+\t\t\t\tdev_info.max_event_port_dequeue_depth;\n+\tif (dev_info.max_event_port_enqueue_depth <\n+\t\t\tconfig.nb_event_port_enqueue_depth)\n+\t\tconfig.nb_event_port_enqueue_depth =\n+\t\t\t\tdev_info.max_event_port_enqueue_depth;\n+\n+\tret = rte_event_dev_configure(dev_id, &config);\n+\tif (ret < 0) {\n+\t\tprintf(\"%d: Error configuring device\\n\", __LINE__);\n+\t\treturn 
-1;\n+\t}\n+\n+\t/* Q creation - one load balanced per pipeline stage*/\n+\tprintf(\"  Stages:\\n\");\n+\tfor (i = 0; i < num_stages; i++) {\n+\t\tif (rte_event_queue_setup(dev_id, i, &wkr_q_conf) < 0) {\n+\t\t\tprintf(\"%d: error creating qid %d\\n\", __LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tqid[i] = i;\n+\t\tnext_qid[i] = i+1;\n+\t\tworker_queues[i].queue_id = i;\n+\t\tif (enable_queue_priorities) {\n+\t\t\t/* calculate priority stepping for each stage, leaving\n+\t\t\t * headroom of 1 for the SINGLE_LINK TX below\n+\t\t\t */\n+\t\t\tconst uint32_t prio_delta =\n+\t\t\t\t(RTE_EVENT_DEV_PRIORITY_LOWEST-1) /  nb_queues;\n+\n+\t\t\t/* higher priority for queues closer to tx */\n+\t\t\twkr_q_conf.priority =\n+\t\t\t\tRTE_EVENT_DEV_PRIORITY_LOWEST - prio_delta * i;\n+\t\t}\n+\n+\t\tconst char *type_str = \"Atomic\";\n+\t\tswitch (wkr_q_conf.event_queue_cfg) {\n+\t\tcase RTE_EVENT_QUEUE_CFG_ORDERED_ONLY:\n+\t\t\ttype_str = \"Ordered\";\n+\t\t\tbreak;\n+\t\tcase RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY:\n+\t\t\ttype_str = \"Parallel\";\n+\t\t\tbreak;\n+\t\t}\n+\t\tprintf(\"\\tStage %d, Type %s\\tPriority = %d\\n\", i, type_str,\n+\t\t\t\twkr_q_conf.priority);\n+\t}\n+\tprintf(\"\\n\");\n+\n+\t/* final queue for sending to TX core */\n+\tif (rte_event_queue_setup(dev_id, i, &tx_q_conf) < 0) {\n+\t\tprintf(\"%d: error creating qid %d\\n\", __LINE__, i);\n+\t\treturn -1;\n+\t}\n+\ttx_queue.queue_id = i;\n+\ttx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;\n+\n+\tif (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)\n+\t\twkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;\n+\tif (wkr_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)\n+\t\twkr_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;\n+\n+\t/* set up one port per worker, linking to all stage queues */\n+\tfor (i = 0; i < num_workers; i++) {\n+\t\tstruct worker_data *w = &worker_data[i];\n+\t\tw->dev_id = dev_id;\n+\t\tif (rte_event_port_setup(dev_id, i, &wkr_p_conf) < 0) {\n+\t\t\tprintf(\"Error setting up port %d\\n\", i);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\tuint32_t s;\n+\t\tfor (s = 0; s < num_stages; s++) {\n+\t\t\tif (rte_event_port_link(dev_id, i,\n+\t\t\t\t\t\t&worker_queues[s].queue_id,\n+\t\t\t\t\t\t&worker_queues[s].priority,\n+\t\t\t\t\t\t1) != 1) {\n+\t\t\t\tprintf(\"%d: error creating link for port %d\\n\",\n+\t\t\t\t\t\t__LINE__, i);\n+\t\t\t\treturn -1;\n+\t\t\t}\n+\t\t}\n+\t\tw->port_id = i;\n+\t}\n+\n+\tif (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)\n+\t\ttx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;\n+\tif (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)\n+\t\ttx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;\n+\n+\t/* port for consumer, linked to TX queue */\n+\tif (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {\n+\t\tprintf(\"Error setting up port %d\\n\", i);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_port_link(dev_id, i, &tx_queue.queue_id,\n+\t\t\t\t&tx_queue.priority, 1) != 1) {\n+\t\tprintf(\"%d: error creating link for port %d\\n\",\n+\t\t\t\t__LINE__, i);\n+\t\treturn -1;\n+\t}\n+\t/* port for producer, no links */\n+\tstruct rte_event_port_conf rx_p_conf = {\n+\t\t\t.dequeue_depth = 8,\n+\t\t\t.enqueue_depth = 8,\n+\t\t\t.new_event_threshold = 1200,\n+\t};\n+\n+\tif (rx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)\n+\t\trx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;\n+\tif (rx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)\n+\t\trx_p_conf.enqueue_depth = 
config.nb_event_port_enqueue_depth;\n+\n+\tif (rte_event_port_setup(dev_id, i + 1, &rx_p_conf) < 0) {\n+\t\tprintf(\"Error setting up port %d\\n\", i);\n+\t\treturn -1;\n+\t}\n+\n+\t*prod_data = (struct prod_data){.dev_id = dev_id,\n+\t\t\t\t\t.port_id = i + 1,\n+\t\t\t\t\t.qid = qid[0] };\n+\t*cons_data = (struct cons_data){.dev_id = dev_id,\n+\t\t\t\t\t.port_id = i };\n+\n+\tif (rte_event_dev_start(dev_id) < 0) {\n+\t\tprintf(\"Error starting eventdev\\n\");\n+\t\treturn -1;\n+\t}\n+\n+\treturn dev_id;\n+}\n+\n+static void\n+signal_handler(int signum)\n+{\n+\tif (fdata->done)\n+\t\trte_exit(1, \"Exiting on signal %d\\n\", signum);\n+\tif (signum == SIGINT || signum == SIGTERM) {\n+\t\tprintf(\"\\n\\nSignal %d received, preparing to exit...\\n\",\n+\t\t\t\tsignum);\n+\t\tfdata->done = 1;\n+\t}\n+\tif (signum == SIGTSTP)\n+\t\trte_event_dev_dump(0, stdout);\n+}\n+\n+int\n+main(int argc, char **argv)\n+{\n+\tstruct worker_data *worker_data;\n+\tunsigned int num_ports;\n+\tint lcore_id;\n+\tint err;\n+\n+\tsignal(SIGINT, signal_handler);\n+\tsignal(SIGTERM, signal_handler);\n+\tsignal(SIGTSTP, signal_handler);\n+\n+\terr = rte_eal_init(argc, argv);\n+\tif (err < 0)\n+\t\trte_panic(\"Invalid EAL arguments\\n\");\n+\n+\targc -= err;\n+\targv += err;\n+\n+\tfdata = rte_malloc(NULL, sizeof(struct fastpath_data), 0);\n+\tif (fdata == NULL)\n+\t\trte_panic(\"Out of memory\\n\");\n+\n+\t/* Parse cli options*/\n+\tparse_app_args(argc, argv);\n+\n+\tnum_ports = rte_eth_dev_count();\n+\tif (num_ports == 0)\n+\t\trte_panic(\"No ethernet ports found\\n\");\n+\n+\tconst unsigned int cores_needed = active_cores;\n+\n+\tif (!cdata.quiet) {\n+\t\tprintf(\"  Config:\\n\");\n+\t\tprintf(\"\\tports: %u\\n\", num_ports);\n+\t\tprintf(\"\\tworkers: %u\\n\", num_workers);\n+\t\tprintf(\"\\tpackets: %lu\\n\", num_packets);\n+\t\tprintf(\"\\tQueue-prio: %u\\n\", enable_queue_priorities);\n+\t\tif (queue_type == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)\n+\t\t\tprintf(\"\\tqid0 type: ordered\\n\");\n+\t\tif (queue_type == RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)\n+\t\t\tprintf(\"\\tqid0 type: atomic\\n\");\n+\t\tprintf(\"\\tCores available: %u\\n\", rte_lcore_count());\n+\t\tprintf(\"\\tCores used: %u\\n\", cores_needed);\n+\t}\n+\n+\tif (rte_lcore_count() < cores_needed)\n+\t\trte_panic(\"Too few cores (%d < %d)\\n\", rte_lcore_count(),\n+\t\t\t\tcores_needed);\n+\n+\tconst unsigned int ndevs = rte_event_dev_count();\n+\tif (ndevs == 0)\n+\t\trte_panic(\"No dev_id devs found. 
Pasl in a --vdev eventdev.\\n\");\n+\tif (ndevs > 1)\n+\t\tfprintf(stderr, \"Warning: More than one eventdev, using idx 0\");\n+\n+\tworker_data = rte_calloc(0, num_workers, sizeof(worker_data[0]), 0);\n+\tif (worker_data == NULL)\n+\t\trte_panic(\"rte_calloc failed\\n\");\n+\n+\tint dev_id = setup_eventdev(&prod_data, &cons_data, worker_data);\n+\tif (dev_id < 0)\n+\t\trte_exit(EXIT_FAILURE, \"Error setting up eventdev\\n\");\n+\n+\tprod_data.num_nic_ports = num_ports;\n+\tinit_ports(num_ports);\n+\n+\tint worker_idx = 0;\n+\tRTE_LCORE_FOREACH_SLAVE(lcore_id) {\n+\t\tif (lcore_id >= MAX_NUM_CORE)\n+\t\t\tbreak;\n+\n+\t\tif (!fdata->rx_core[lcore_id] &&\n+\t\t\t!fdata->worker_core[lcore_id] &&\n+\t\t\t!fdata->tx_core[lcore_id] &&\n+\t\t\t!fdata->sched_core[lcore_id])\n+\t\t\tcontinue;\n+\n+\t\tif (fdata->rx_core[lcore_id])\n+\t\t\tprintf(\n+\t\t\t\t\"[%s()] lcore %d executing NIC Rx, and using eventdev port %u\\n\",\n+\t\t\t\t__func__, lcore_id, prod_data.port_id);\n+\n+\t\tif (fdata->tx_core[lcore_id])\n+\t\t\tprintf(\n+\t\t\t\t\"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\\n\",\n+\t\t\t\t__func__, lcore_id, cons_data.port_id);\n+\n+\t\tif (fdata->sched_core[lcore_id])\n+\t\t\tprintf(\"[%s()] lcore %d executing scheduler\\n\",\n+\t\t\t\t\t__func__, lcore_id);\n+\n+\t\tif (fdata->worker_core[lcore_id])\n+\t\t\tprintf(\n+\t\t\t\t\"[%s()] lcore %d executing worker, using eventdev port %u\\n\",\n+\t\t\t\t__func__, lcore_id,\n+\t\t\t\tworker_data[worker_idx].port_id);\n+\n+\t\terr = rte_eal_remote_launch(worker, &worker_data[worker_idx],\n+\t\t\t\t\t    lcore_id);\n+\t\tif (err) {\n+\t\t\trte_panic(\"Failed to launch worker on core %d\\n\",\n+\t\t\t\t\tlcore_id);\n+\t\t\tcontinue;\n+\t\t}\n+\t\tif (fdata->worker_core[lcore_id])\n+\t\t\tworker_idx++;\n+\t}\n+\n+\tlcore_id = rte_lcore_id();\n+\n+\tif (core_in_use(lcore_id))\n+\t\tworker(&worker_data[worker_idx++]);\n+\n+\trte_eal_mp_wait_lcore();\n+\n+\tif (cdata.dump_dev)\n+\t\trte_event_dev_dump(dev_id, stdout);\n+\n+\tif (!cdata.quiet) {\n+\t\tprintf(\"\\nPort Workload distribution:\\n\");\n+\t\tuint32_t i;\n+\t\tuint64_t tot_pkts = 0;\n+\t\tuint64_t pkts_per_wkr[RTE_MAX_LCORE] = {0};\n+\t\tfor (i = 0; i < num_workers; i++) {\n+\t\t\tchar statname[64];\n+\t\t\tuint64_t retval;\n+\n+\t\t\tsnprintf(statname, sizeof(statname), \"port_%u_rx\",\n+\t\t\t\t\tworker_data[i].port_id);\n+\t\t\tretval = rte_event_dev_xstats_by_name_get(\n+\t\t\t\t\tdev_id, statname, NULL);\n+\t\t\tif (retval != (uint64_t)-ENOTSUP) {\n+\t\t\t\tpkts_per_wkr[i] =  retval;\n+\t\t\t\ttot_pkts += pkts_per_wkr[i];\n+\t\t\t}\n+\t\t}\n+\t\tfor (i = 0; i < num_workers; i++) {\n+\t\t\tfloat pc = pkts_per_wkr[i]  * 100 /\n+\t\t\t\t((float)tot_pkts);\n+\t\t\tprintf(\"worker %i :\\t%.1f %% (%\"PRIu64\" pkts)\\n\",\n+\t\t\t\t\ti, pc, pkts_per_wkr[i]);\n+\t\t}\n+\n+\t}\n+\n+\treturn 0;\n+}\n",
    "prefixes": [
        "dpdk-dev",
        "v6",
        "1/3"
    ]
}
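
For the PATCH/PUT methods listed above, write access is required. A hedged
sketch of a partial update (it assumes a Patchwork API token with maintainer
rights on the project; the token value and the new state here are
placeholders, and the set of writable fields depends on the server):

    import requests

    API_TOKEN = "0123456789abcdef"  # hypothetical token, not a real credential
    url = "https://patches.dpdk.org/api/patches/26326/"

    # PATCH changes only the fields supplied in the request body.
    resp = requests.patch(
        url,
        headers={"Authorization": f"Token {API_TOKEN}"},
        json={"state": "accepted"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["state"])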