get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch (the full resource is replaced).
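
As a quick illustration (not part of the API dump below), the same resource can be fetched programmatically. This is a minimal Python sketch using the third-party requests library; the URL matches the request shown next, and the field names come from the JSON body below. It assumes a stock Django REST Framework deployment, where a plain GET returns JSON by default:

import requests

# Read-only GET of a single patch from the Patchwork REST API.
# No ?format= parameter is needed: the default renderer is JSON.
url = "http://patches.dpdk.org/api/patches/20114/"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

patch = resp.json()
print(patch["name"])                  # patch subject
print(patch["state"])                 # e.g. "superseded"
print(patch["submitter"]["email"])    # who sent it

# PATCH/PUT updates require authentication (e.g. an API token),
# which this read-only sketch does not attempt.
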

GET /api/patches/20114/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 20114,
    "url": "http://patches.dpdk.org/api/patches/20114/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1485879273-86228-16-git-send-email-harry.van.haaren@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1485879273-86228-16-git-send-email-harry.van.haaren@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1485879273-86228-16-git-send-email-harry.van.haaren@intel.com",
    "date": "2017-01-31T16:14:33",
    "name": "[dpdk-dev,v2,15/15] app/test: add unit tests for SW eventdev driver",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "a62d1ca92512d04a1aed4caa82c218387dd3629e",
    "submitter": {
        "id": 317,
        "url": "http://patches.dpdk.org/api/people/317/?format=api",
        "name": "Van Haaren, Harry",
        "email": "harry.van.haaren@intel.com"
    },
    "delegate": {
        "id": 10,
        "url": "http://patches.dpdk.org/api/users/10/?format=api",
        "username": "bruce",
        "first_name": "Bruce",
        "last_name": "Richardson",
        "email": "bruce.richardson@intel.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1485879273-86228-16-git-send-email-harry.van.haaren@intel.com/mbox/",
    "series": [],
    "comments": "http://patches.dpdk.org/api/patches/20114/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/20114/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id E5D21F972;\n\tTue, 31 Jan 2017 17:15:46 +0100 (CET)",
            "from mga09.intel.com (mga09.intel.com [134.134.136.24])\n\tby dpdk.org (Postfix) with ESMTP id 67D632B98\n\tfor <dev@dpdk.org>; Tue, 31 Jan 2017 17:15:08 +0100 (CET)",
            "from fmsmga003.fm.intel.com ([10.253.24.29])\n\tby orsmga102.jf.intel.com with ESMTP; 31 Jan 2017 08:15:07 -0800",
            "from silpixa00398672.ir.intel.com ([10.237.223.128])\n\tby FMSMGA003.fm.intel.com with ESMTP; 31 Jan 2017 08:15:04 -0800"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.33,315,1477983600\"; d=\"scan'208\";a=\"815468415\"",
        "From": "Harry van Haaren <harry.van.haaren@intel.com>",
        "To": "dev@dpdk.org",
        "Cc": "jerin.jacob@caviumnetworks.com,\n\tBruce Richardson <bruce.richardson@intel.com>,\n\tDavid Hunt <david.hunt@intel.com>,\n\tHarry van Haaren <harry.van.haaren@intel.com>",
        "Date": "Tue, 31 Jan 2017 16:14:33 +0000",
        "Message-Id": "<1485879273-86228-16-git-send-email-harry.van.haaren@intel.com>",
        "X-Mailer": "git-send-email 2.7.4",
        "In-Reply-To": "<1485879273-86228-1-git-send-email-harry.van.haaren@intel.com>",
        "References": "<1484580885-148524-1-git-send-email-harry.van.haaren@intel.com>\n\t<1485879273-86228-1-git-send-email-harry.van.haaren@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v2 15/15] app/test: add unit tests for SW\n\teventdev driver",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Bruce Richardson <bruce.richardson@intel.com>\n\nSince the sw driver is a standalone lookaside device that has no HW\nrequirements, we can provide a set of unit tests that test its\nfunctionality across the different queue types and with different input\nscenarios.\n\nThis also adds the tests to be automatically run by autotest.py\n\nSigned-off-by: Bruce Richardson <bruce.richardson@intel.com>\nSigned-off-by: David Hunt <david.hunt@intel.com>\nSigned-off-by: Harry van Haaren <harry.van.haaren@intel.com>\n---\n app/test/Makefile           |    5 +-\n app/test/autotest_data.py   |   26 +\n app/test/test_sw_eventdev.c | 2071 +++++++++++++++++++++++++++++++++++++++++++\n 3 files changed, 2101 insertions(+), 1 deletion(-)\n create mode 100644 app/test/test_sw_eventdev.c",
    "diff": "diff --git a/app/test/Makefile b/app/test/Makefile\nindex e28c079..1770c09 100644\n--- a/app/test/Makefile\n+++ b/app/test/Makefile\n@@ -197,7 +197,10 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c\n SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c\n SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c\n \n-SRCS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += test_eventdev.c\n+ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)\n+SRCS-y += test_eventdev.c\n+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += test_sw_eventdev.c\n+endif\n \n SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c\n \ndiff --git a/app/test/autotest_data.py b/app/test/autotest_data.py\nindex 0cd598b..165ed6c 100644\n--- a/app/test/autotest_data.py\n+++ b/app/test/autotest_data.py\n@@ -346,6 +346,32 @@ def per_sockets(num):\n non_parallel_test_group_list = [\n \n     {\n+        \"Prefix\":    \"eventdev\",\n+        \"Memory\":    \"512\",\n+        \"Tests\":\n+        [\n+            {\n+                \"Name\":    \"Eventdev common autotest\",\n+                \"Command\": \"eventdev_common_autotest\",\n+                \"Func\":    default_autotest,\n+                \"Report\":  None,\n+            },\n+        ]\n+    },\n+    {\n+        \"Prefix\":    \"eventdev_sw\",\n+        \"Memory\":    \"512\",\n+        \"Tests\":\n+        [\n+            {\n+                \"Name\":    \"Eventdev sw autotest\",\n+                \"Command\": \"eventdev_sw_autotest\",\n+                \"Func\":    default_autotest,\n+                \"Report\":  None,\n+            },\n+        ]\n+    },\n+    {\n         \"Prefix\":    \"kni\",\n         \"Memory\":    \"512\",\n         \"Tests\":\ndiff --git a/app/test/test_sw_eventdev.c b/app/test/test_sw_eventdev.c\nnew file mode 100644\nindex 0000000..6322f36\n--- /dev/null\n+++ b/app/test/test_sw_eventdev.c\n@@ -0,0 +1,2071 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2016-2017 Intel Corporation. All rights reserved.\n+ *   All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <stdio.h>\n+#include <string.h>\n+#include <stdint.h>\n+#include <errno.h>\n+#include <unistd.h>\n+#include <sys/queue.h>\n+\n+#include <rte_memory.h>\n+#include <rte_memzone.h>\n+#include <rte_launch.h>\n+#include <rte_eal.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_debug.h>\n+#include <rte_ethdev.h>\n+#include <rte_cycles.h>\n+\n+#include <rte_eventdev.h>\n+#include \"test.h\"\n+\n+#define MAX_PORTS 16\n+#define MAX_QIDS 16\n+#define NUM_PACKETS (1<<18)\n+\n+static int evdev;\n+\n+struct test {\n+\tstruct rte_mempool *mbuf_pool;\n+\tuint8_t port[MAX_PORTS];\n+\tuint8_t qid[MAX_QIDS];\n+\tint nb_qids;\n+};\n+\n+static struct rte_event release_ev = {.op = RTE_EVENT_OP_RELEASE };\n+\n+static inline struct rte_mbuf *\n+rte_gen_arp(int portid, struct rte_mempool *mp)\n+{\n+\t/*\n+\t* len = 14 + 46\n+\t* ARP, Request who-has 10.0.0.1 tell 10.0.0.2, length 46\n+\t*/\n+\tstatic const uint8_t arp_request[] = {\n+\t\t/*0x0000:*/ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xec, 0xa8,\n+\t\t0x6b, 0xfd, 0x02, 0x29, 0x08, 0x06, 0x00, 0x01,\n+\t\t/*0x0010:*/ 0x08, 0x00, 0x06, 0x04, 0x00, 0x01, 0xec, 0xa8,\n+\t\t0x6b, 0xfd, 0x02, 0x29, 0x0a, 0x00, 0x00, 0x01,\n+\t\t/*0x0020:*/ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0a, 0x00,\n+\t\t0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,\n+\t\t/*0x0030:*/ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,\n+\t\t0x00, 0x00, 0x00, 0x00\n+\t};\n+\tstruct rte_mbuf *m;\n+\tint pkt_len = sizeof(arp_request) - 1;\n+\n+\tm = rte_pktmbuf_alloc(mp);\n+\tif (!m)\n+\t\treturn 0;\n+\n+\tmemcpy((void *)((uintptr_t)m->buf_addr + m->data_off),\n+\t\tarp_request, pkt_len);\n+\trte_pktmbuf_pkt_len(m) = pkt_len;\n+\trte_pktmbuf_data_len(m) = pkt_len;\n+\n+\tRTE_SET_USED(portid);\n+\n+\treturn m;\n+}\n+\n+/* initialization and config */\n+static inline int\n+init(struct test *t, int nb_queues, int nb_ports)\n+{\n+\tstruct rte_event_dev_config config = {\n+\t\t\t.nb_event_queues = nb_queues,\n+\t\t\t.nb_event_ports = nb_ports,\n+\t\t\t.nb_event_queue_flows = 1024,\n+\t\t\t.nb_events_limit = 4096,\n+\t\t\t.nb_event_port_dequeue_depth = 128,\n+\t\t\t.nb_event_port_enqueue_depth = 128,\n+\t};\n+\tint ret;\n+\n+\tvoid *temp = t->mbuf_pool; /* save and restore mbuf pool */\n+\n+\tmemset(t, 0, sizeof(*t));\n+\tt->mbuf_pool = temp;\n+\n+\tret = rte_event_dev_configure(evdev, &config);\n+\tif (ret < 0)\n+\t\tprintf(\"%d: Error configuring device\\n\", __LINE__);\n+\treturn ret;\n+};\n+\n+static inline int\n+create_ports(struct test *t, int num_ports)\n+{\n+\tint i;\n+\tstatic const struct rte_event_port_conf conf = {\n+\t\t\t.new_event_threshold = 1024,\n+\t\t\t.dequeue_depth = 32,\n+\t\t\t.enqueue_depth = 64,\n+\t};\n+\tif (num_ports > MAX_PORTS)\n+\t\treturn -1;\n+\n+\tfor (i = 0; i < num_ports; i++) {\n+\t\tif (rte_event_port_setup(evdev, i, &conf) < 0) {\n+\t\t\tprintf(\"Error setting up port %d\\n\", i);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tt->port[i] = i;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static inline 
int\n+create_lb_qids(struct test *t, int num_qids, uint32_t flags)\n+{\n+\tint i;\n+\n+\t/* Q creation */\n+\tconst struct rte_event_queue_conf conf = {\n+\t\t\t.event_queue_cfg = flags,\n+\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,\n+\t\t\t.nb_atomic_flows = 1024,\n+\t\t\t.nb_atomic_order_sequences = 1024,\n+\t};\n+\n+\tfor (i = t->nb_qids; i < t->nb_qids + num_qids; i++) {\n+\t\tif (rte_event_queue_setup(evdev, i, &conf) < 0) {\n+\t\t\tprintf(\"%d: error creating qid %d\\n\", __LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tt->qid[i] = i;\n+\t}\n+\tt->nb_qids += num_qids;\n+\tif (t->nb_qids > MAX_QIDS)\n+\t\treturn -1;\n+\n+\treturn 0;\n+}\n+\n+static inline int\n+create_atomic_qids(struct test *t, int num_qids)\n+{\n+\treturn create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY);\n+}\n+\n+static inline int\n+create_ordered_qids(struct test *t, int num_qids)\n+{\n+\treturn create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_ORDERED_ONLY);\n+}\n+\n+\n+static inline int\n+create_unordered_qids(struct test *t, int num_qids)\n+{\n+\treturn create_lb_qids(t, num_qids, RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY);\n+}\n+\n+static inline int\n+create_directed_qids(struct test *t, int num_qids, const uint8_t ports[])\n+{\n+\tint i;\n+\n+\t/* Q creation */\n+\tstatic const struct rte_event_queue_conf conf = {\n+\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,\n+\t\t\t.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,\n+\t\t\t.nb_atomic_flows = 1024,\n+\t\t\t.nb_atomic_order_sequences = 1024,\n+\t};\n+\n+\tfor (i = t->nb_qids; i < t->nb_qids + num_qids; i++) {\n+\t\tif (rte_event_queue_setup(evdev, i, &conf) < 0) {\n+\t\t\tprintf(\"%d: error creating qid %d\\n\", __LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tt->qid[i] = i;\n+\n+\t\tif (rte_event_port_link(evdev, ports[i - t->nb_qids],\n+\t\t\t\t&t->qid[i], NULL, 1) != 1) {\n+\t\t\tprintf(\"%d: error creating link for qid %d\\n\",\n+\t\t\t\t\t__LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\tt->nb_qids += num_qids;\n+\tif (t->nb_qids > MAX_QIDS)\n+\t\treturn -1;\n+\n+\treturn 0;\n+}\n+\n+/* destruction */\n+static inline int\n+cleanup(struct test *t __rte_unused)\n+{\n+\trte_event_dev_stop(evdev);\n+\trte_event_dev_close(evdev);\n+\treturn 0;\n+};\n+\n+struct test_event_dev_stats {\n+\tuint64_t rx_pkts;       /**< Total packets received */\n+\tuint64_t rx_dropped;    /**< Total packets dropped (Eg Invalid QID) */\n+\tuint64_t tx_pkts;       /**< Total packets transmitted */\n+\n+\t/** Packets received on this port */\n+\tuint64_t port_rx_pkts[MAX_PORTS];\n+\t/** Packets dropped on this port */\n+\tuint64_t port_rx_dropped[MAX_PORTS];\n+\t/** Packets inflight on this port */\n+\tuint64_t port_inflight[MAX_PORTS];\n+\t/** Packets transmitted on this port */\n+\tuint64_t port_tx_pkts[MAX_PORTS];\n+\t/** Packets received on this qid */\n+\tuint64_t qid_rx_pkts[MAX_QIDS];\n+\t/** Packets dropped on this qid */\n+\tuint64_t qid_rx_dropped[MAX_QIDS];\n+\t/** Packets transmitted on this qid */\n+\tuint64_t qid_tx_pkts[MAX_QIDS];\n+};\n+\n+static inline int\n+test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)\n+{\n+\tstatic uint32_t i;\n+\tstatic uint32_t total_ids[3]; /* rx, tx and drop */\n+\tstatic uint32_t port_rx_pkts_ids[MAX_PORTS];\n+\tstatic uint32_t port_rx_dropped_ids[MAX_PORTS];\n+\tstatic uint32_t port_inflight_ids[MAX_PORTS];\n+\tstatic uint32_t port_tx_pkts_ids[MAX_PORTS];\n+\tstatic uint32_t qid_rx_pkts_ids[MAX_QIDS];\n+\tstatic uint32_t qid_rx_dropped_ids[MAX_QIDS];\n+\tstatic uint32_t 
qid_tx_pkts_ids[MAX_QIDS];\n+\n+\n+\tstats->rx_pkts = rte_event_dev_xstats_by_name_get(dev_id,\n+\t\t\t\"dev_rx\", &total_ids[0]);\n+\tstats->rx_dropped = rte_event_dev_xstats_by_name_get(dev_id,\n+\t\t\t\"dev_drop\", &total_ids[1]);\n+\tstats->tx_pkts = rte_event_dev_xstats_by_name_get(dev_id,\n+\t\t\t\"dev_tx\", &total_ids[2]);\n+\tfor (i = 0; i < MAX_PORTS; i++) {\n+\t\tchar name[32];\n+\t\tsnprintf(name, sizeof(name), \"port_%u_rx\", i);\n+\t\tstats->port_rx_pkts[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &port_rx_pkts_ids[i]);\n+\t\tsnprintf(name, sizeof(name), \"port_%u_drop\", i);\n+\t\tstats->port_rx_dropped[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &port_rx_dropped_ids[i]);\n+\t\tsnprintf(name, sizeof(name), \"port_%u_inflight\", i);\n+\t\tstats->port_inflight[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &port_inflight_ids[i]);\n+\t\tsnprintf(name, sizeof(name), \"port_%u_tx\", i);\n+\t\tstats->port_tx_pkts[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &port_tx_pkts_ids[i]);\n+\t}\n+\tfor (i = 0; i < MAX_QIDS; i++) {\n+\t\tchar name[32];\n+\t\tsnprintf(name, sizeof(name), \"qid_%u_rx\", i);\n+\t\tstats->qid_rx_pkts[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &qid_rx_pkts_ids[i]);\n+\t\tsnprintf(name, sizeof(name), \"qid_%u_drop\", i);\n+\t\tstats->qid_rx_dropped[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &qid_rx_dropped_ids[i]);\n+\t\tsnprintf(name, sizeof(name), \"qid_%u_tx\", i);\n+\t\tstats->qid_tx_pkts[i] = rte_event_dev_xstats_by_name_get(\n+\t\t\t\tdev_id, name, &qid_tx_pkts_ids[i]);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+/* run_prio_packet_test\n+ * This performs a basic packet priority check on the test instance passed in.\n+ * It is factored out of the main priority tests as the same tests must be\n+ * performed to ensure prioritization of each type of QID.\n+ *\n+ * Requirements:\n+ *  - An initialized test structure, including mempool\n+ *  - t->port[0] is initialized for both Enq / Deq of packets to the QID\n+ *  - t->qid[0] is the QID to be tested\n+ *  - if LB QID, the CQ must be mapped to the QID.\n+ */\n+static int\n+run_prio_packet_test(struct test *t)\n+{\n+\tint err;\n+\tconst uint32_t MAGIC_SEQN[] = {4711, 1234};\n+\tconst uint32_t PRIORITY[] = {3, 0};\n+\tunsigned int i;\n+\tfor (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {\n+\t\t/* generate pkt and enqueue */\n+\t\tstruct rte_event ev;\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tarp->seqn = MAGIC_SEQN[i];\n+\n+\t\tev = (struct rte_event){\n+\t\t\t.priority = PRIORITY[i],\n+\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t.queue_id = t->qid[0],\n+\t\t\t.mbuf = arp\n+\t\t};\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: error failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\trte_event_schedule(evdev);\n+\n+\tstruct test_event_dev_stats stats;\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: error failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (stats.port_rx_pkts[t->port[0]] != 2) {\n+\t\tprintf(\"%d: error stats incorrect for directed port\\n\",\n+\t\t\t\t__LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\n+\tstruct rte_event ev, ev2;\n+\tuint32_t deq_pkts;\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, t->port[0], &ev, 1, 0);\n+\tif 
(deq_pkts != 1) {\n+\t\tprintf(\"%d: error failed to deq\\n\", __LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\tif (ev.mbuf->seqn != MAGIC_SEQN[1]) {\n+\t\tprintf(\"%d: first packet out not highest priority\\n\",\n+\t\t\t\t__LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\trte_pktmbuf_free(ev.mbuf);\n+\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, t->port[0], &ev2, 1, 0);\n+\tif (deq_pkts != 1) {\n+\t\tprintf(\"%d: error failed to deq\\n\", __LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\tif (ev2.mbuf->seqn != MAGIC_SEQN[0]) {\n+\t\tprintf(\"%d: second packet out not lower priority\\n\",\n+\t\t\t\t__LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\trte_pktmbuf_free(ev2.mbuf);\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+test_single_directed_packet(struct test *t)\n+{\n+\tconst int rx_enq = 0;\n+\tconst int wrk_enq = 2;\n+\tint err;\n+\n+\t/* Create instance with 3 directed QIDs going to 3 ports */\n+\tif (init(t, 3, 3) < 0 ||\n+\t\t\tcreate_ports(t, 3) < 0 ||\n+\t\t\tcreate_directed_qids(t, 3, t->port) < 0)\n+\t\treturn -1;\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/************** FORWARD ****************/\n+\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\tstruct rte_event ev = {\n+\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t.queue_id = wrk_enq,\n+\t\t\t.mbuf = arp,\n+\t};\n+\n+\tif (!arp) {\n+\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tconst uint32_t MAGIC_SEQN = 4711;\n+\tarp->seqn = MAGIC_SEQN;\n+\n+\t/* generate pkt and enqueue */\n+\terr = rte_event_enqueue_burst(evdev, rx_enq, &ev, 1);\n+\tif (err < 0) {\n+\t\tprintf(\"%d: error failed to enqueue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* Run schedule() as dir packets may need to be re-ordered */\n+\trte_event_schedule(evdev);\n+\n+\tstruct test_event_dev_stats stats;\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: error failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (stats.port_rx_pkts[rx_enq] != 1) {\n+\t\tprintf(\"%d: error stats incorrect for directed port\\n\",\n+\t\t\t\t__LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tuint32_t deq_pkts;\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, wrk_enq, &ev, 1, 0);\n+\tif (deq_pkts != 1) {\n+\t\tprintf(\"%d: error failed to deq\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_rx_pkts[wrk_enq] != 0 &&\n+\t\t\tstats.port_rx_pkts[wrk_enq] != 1) {\n+\t\tprintf(\"%d: error directed stats post-dequeue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (ev.mbuf->seqn != MAGIC_SEQN) {\n+\t\tprintf(\"%d: error magic sequence number not dequeued\\n\",\n+\t\t\t\t__LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\trte_pktmbuf_free(ev.mbuf);\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+\n+static int\n+test_priority_directed(struct test *t)\n+{\n+\tif (init(t, 1, 1) < 0 ||\n+\t\t\tcreate_ports(t, 1) < 0 ||\n+\t\t\tcreate_directed_qids(t, 1, t->port) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\treturn run_prio_packet_test(t);\n+}\n+\n+static int\n+test_priority_atomic(struct test *t)\n+{\n+\tif (init(t, 1, 1) < 0 ||\n+\t\t\tcreate_ports(t, 1) < 0 
||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* map the QID */\n+\tif (rte_event_port_link(evdev, t->port[0], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping qid to port\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\treturn run_prio_packet_test(t);\n+}\n+\n+static int\n+test_priority_ordered(struct test *t)\n+{\n+\tif (init(t, 1, 1) < 0 ||\n+\t\t\tcreate_ports(t, 1) < 0 ||\n+\t\t\tcreate_ordered_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* map the QID */\n+\tif (rte_event_port_link(evdev, t->port[0], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping qid to port\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\treturn run_prio_packet_test(t);\n+}\n+\n+static int\n+test_priority_unordered(struct test *t)\n+{\n+\tif (init(t, 1, 1) < 0 ||\n+\t\t\tcreate_ports(t, 1) < 0 ||\n+\t\t\tcreate_unordered_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* map the QID */\n+\tif (rte_event_port_link(evdev, t->port[0], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping qid to port\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\treturn run_prio_packet_test(t);\n+}\n+\n+static int\n+burst_packets(struct test *t)\n+{\n+\t/************** CONFIG ****************/\n+\tuint32_t i;\n+\tint err;\n+\tint ret;\n+\n+\t/* Create instance with 2 ports and 2 queues */\n+\tif (init(t, 2, 2) < 0 ||\n+\t\t\tcreate_ports(t, 2) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 2) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* CQ mapping to QID */\n+\tret = rte_event_port_link(evdev, t->port[0], &t->qid[0], NULL, 1);\n+\tif (ret != 1) {\n+\t\tprintf(\"%d: error mapping lb qid0\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tret = rte_event_port_link(evdev, t->port[1], &t->qid[1], NULL, 1);\n+\tif (ret != 1) {\n+\t\tprintf(\"%d: error mapping lb qid1\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/************** FORWARD ****************/\n+\tconst uint32_t rx_port = 0;\n+\tconst uint32_t NUM_PKTS = 2;\n+\n+\tfor (i = 0; i < NUM_PKTS; i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: error generating pkt\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = i % 2,\n+\t\t\t\t.flow_id = i % 3,\n+\t\t\t\t.mbuf = arp,\n+\t\t};\n+\t\t/* generate pkt and enqueue */\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\t/* Check stats for all NUM_PKTS arrived to sched core */\n+\tstruct test_event_dev_stats stats;\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif 
(stats.rx_pkts != NUM_PKTS || stats.tx_pkts != NUM_PKTS) {\n+\t\tprintf(\"%d: Sched core didn't receive all %d pkts\\n\",\n+\t\t\t\t__LINE__, NUM_PKTS);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\n+\tuint32_t deq_pkts;\n+\tint p;\n+\n+\tdeq_pkts = 0;\n+\t/******** DEQ QID 1 *******/\n+\tdo {\n+\t\tstruct rte_event ev;\n+\t\tp = rte_event_dequeue_burst(evdev, t->port[0], &ev, 1, 0);\n+\t\tdeq_pkts += p;\n+\t\trte_pktmbuf_free(ev.mbuf);\n+\t} while (p);\n+\n+\tif (deq_pkts != NUM_PKTS/2) {\n+\t\tprintf(\"%d: Half of NUM_PKTS didn't arrive at port 1\\n\",\n+\t\t\t\t__LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/******** DEQ QID 2 *******/\n+\tdeq_pkts = 0;\n+\tdo {\n+\t\tstruct rte_event ev;\n+\t\tp = rte_event_dequeue_burst(evdev, t->port[1], &ev, 1, 0);\n+\t\tdeq_pkts += p;\n+\t\trte_pktmbuf_free(ev.mbuf);\n+\t} while (p);\n+\tif (deq_pkts != NUM_PKTS/2) {\n+\t\tprintf(\"%d: Half of NUM_PKTS didn't arrive at port 2\\n\",\n+\t\t\t\t__LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+abuse_inflights(struct test *t)\n+{\n+\tconst int rx_enq = 0;\n+\tconst int wrk_enq = 2;\n+\tint err;\n+\n+\t/* Create instance with 4 ports */\n+\tif (init(t, 1, 4) < 0 ||\n+\t\t\tcreate_ports(t, 4) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* CQ mapping to QID */\n+\terr = rte_event_port_link(evdev, t->port[wrk_enq], NULL, NULL, 0);\n+\tif (err != 1) {\n+\t\tprintf(\"%d: error mapping lb qid\\n\", __LINE__);\n+\t\tcleanup(t);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* Enqueue op only */\n+\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &release_ev, 1);\n+\tif (err < 0) {\n+\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* schedule */\n+\trte_event_schedule(evdev);\n+\n+\tstruct test_event_dev_stats stats;\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (stats.rx_pkts != 0 ||\n+\t\t\tstats.tx_pkts != 0 ||\n+\t\t\tstats.port_inflight[wrk_enq] != 0) {\n+\t\tprintf(\"%d: Sched core didn't handle pkt as expected\\n\",\n+\t\t\t\t__LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+qid_priorities(struct test *t)\n+{\n+\t/* Test works by having a CQ with enough empty space for all packets,\n+\t * and enqueueing 3 packets to 3 QIDs. 
They must return based on the\n+\t * priority of the QID, not the ingress order, to pass the test\n+\t */\n+\tunsigned int i;\n+\t/* Create instance with 1 ports, and 3 qids */\n+\tif (init(t, 3, 1) < 0 ||\n+\t\t\tcreate_ports(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tfor (i = 0; i < 3; i++) {\n+\t\t/* Create QID */\n+\t\tconst struct rte_event_queue_conf conf = {\n+\t\t\t.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY,\n+\t\t\t/* increase priority (0 == highest), as we go */\n+\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL - i,\n+\t\t\t.nb_atomic_flows = 1024,\n+\t\t\t.nb_atomic_order_sequences = 1024,\n+\t\t};\n+\n+\t\tif (rte_event_queue_setup(evdev, i, &conf) < 0) {\n+\t\t\tprintf(\"%d: error creating qid %d\\n\", __LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tt->qid[i] = i;\n+\t}\n+\tt->nb_qids = i;\n+\t/* map all QIDs to port */\n+\trte_event_port_link(evdev, t->port[0], NULL, NULL, 0);\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* enqueue 3 packets, setting seqn and QID to check priority */\n+\tfor (i = 0; i < 3; i++) {\n+\t\tstruct rte_event ev;\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tev.queue_id = t->qid[i];\n+\t\tev.op = RTE_EVENT_OP_NEW;\n+\t\tev.mbuf = arp;\n+\t\tarp->seqn = i;\n+\n+\t\tint err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\trte_event_schedule(evdev);\n+\n+\t/* dequeue packets, verify priority was upheld */\n+\tstruct rte_event ev[32];\n+\tuint32_t deq_pkts =\n+\t\trte_event_dequeue_burst(evdev, t->port[0], ev, 32, 0);\n+\tif (deq_pkts != 3) {\n+\t\tprintf(\"%d: failed to deq packets\\n\", __LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\tfor (i = 0; i < 3; i++) {\n+\t\tif (ev[i].mbuf->seqn != 2-i) {\n+\t\t\tprintf(\n+\t\t\t\t\"%d: qid priority test: seqn %d incorrectly prioritized\\n\",\n+\t\t\t\t\t__LINE__, i);\n+\t\t}\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+load_balancing(struct test *t)\n+{\n+\tconst int rx_enq = 0;\n+\tint err;\n+\tuint32_t i;\n+\n+\tif (init(t, 1, 4) < 0 ||\n+\t\t\tcreate_ports(t, 4) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tfor (i = 0; i < 3; i++) {\n+\t\t/* map port 1 - 3 inclusive */\n+\t\tif (rte_event_port_link(evdev, t->port[i+1], &t->qid[0],\n+\t\t\t\tNULL, 1) != 1) {\n+\t\t\tprintf(\"%d: error mapping qid to port %d\\n\",\n+\t\t\t\t\t__LINE__, i);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/************** FORWARD ****************/\n+\t/*\n+\t * Create a set of flows that test the load-balancing operation of the\n+\t * implementation. 
Fill CQ 0 and 1 with flows 0 and 1, and test\n+\t * with a new flow, which should be sent to the 3rd mapped CQ\n+\t */\n+\tstatic uint32_t flows[] = {0, 1, 1, 0, 0, 2, 2, 0, 2};\n+\n+\tfor (i = 0; i < RTE_DIM(flows); i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = t->qid[0],\n+\t\t\t\t.flow_id = flows[i],\n+\t\t\t\t.mbuf = arp,\n+\t\t};\n+\t\t/* generate pkt and enqueue */\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\trte_event_schedule(evdev);\n+\n+\tstruct test_event_dev_stats stats;\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (stats.port_inflight[1] != 4) {\n+\t\tprintf(\"%d:%s: port 1 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\treturn -1;\n+\t}\n+\tif (stats.port_inflight[2] != 2) {\n+\t\tprintf(\"%d:%s: port 2 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\treturn -1;\n+\t}\n+\tif (stats.port_inflight[3] != 3) {\n+\t\tprintf(\"%d:%s: port 3 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\treturn -1;\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+load_balancing_history(struct test *t)\n+{\n+\tstruct test_event_dev_stats stats = {0};\n+\tconst int rx_enq = 0;\n+\tint err;\n+\tuint32_t i;\n+\n+\t/* Create instance with 1 atomic QID going to 3 ports + 1 prod port */\n+\tif (init(t, 1, 4) < 0 ||\n+\t\t\tcreate_ports(t, 4) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0)\n+\t\treturn -1;\n+\n+\t/* CQ mapping to QID */\n+\tif (rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping port 1 qid\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_port_link(evdev, t->port[2], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping port 2 qid\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_port_link(evdev, t->port[3], &t->qid[0], NULL, 1) != 1) {\n+\t\tprintf(\"%d: error mapping port 3 qid\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/*\n+\t * Create a set of flows that test the load-balancing operation of the\n+\t * implementation. Fill CQ 0, 1 and 2 with flows 0, 1 and 2, drop\n+\t * the packet from CQ 0, send in a new set of flows. Ensure that:\n+\t *  1. The new flow 3 gets into the empty CQ0\n+\t *  2. packets for existing flow gets added into CQ1\n+\t *  3. Next flow 0 pkt is now onto CQ2, since CQ0 and CQ1 now contain\n+\t *     more outstanding pkts\n+\t *\n+\t *  This test makes sure that when a flow ends (i.e. 
all packets\n+\t *  have been completed for that flow), that the flow can be moved\n+\t *  to a different CQ when new packets come in for that flow.\n+\t */\n+\tstatic uint32_t flows1[] = {0, 1, 1, 2};\n+\n+\tfor (i = 0; i < RTE_DIM(flows1); i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.flow_id = flows1[i],\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = t->qid[0],\n+\t\t\t\t.event_type = RTE_EVENT_TYPE_CPU,\n+\t\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,\n+\t\t\t\t.mbuf = arp\n+\t\t};\n+\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tarp->hash.rss = flows1[i];\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\t/* call the scheduler */\n+\trte_event_schedule(evdev);\n+\n+\t/* Dequeue the flow 0 packet from port 1, so that we can then drop */\n+\tstruct rte_event ev;\n+\tif (!rte_event_dequeue_burst(evdev, t->port[1], &ev, 1, 0)) {\n+\t\tprintf(\"%d: failed to dequeue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tif (ev.mbuf->hash.rss != flows1[0]) {\n+\t\tprintf(\"%d: unexpected flow received\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* drop the flow 0 packet from port 1 */\n+\trte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);\n+\n+\t/* call the scheduler */\n+\trte_event_schedule(evdev);\n+\n+\t/*\n+\t * Set up the next set of flows, first a new flow to fill up\n+\t * CQ 0, so that the next flow 0 packet should go to CQ2\n+\t */\n+\tstatic uint32_t flows2[] = { 3, 3, 3, 1, 1, 0 };\n+\n+\tfor (i = 0; i < RTE_DIM(flows2); i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.flow_id = flows2[i],\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = t->qid[0],\n+\t\t\t\t.event_type = RTE_EVENT_TYPE_CPU,\n+\t\t\t\t.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,\n+\t\t\t\t.mbuf = arp\n+\t\t};\n+\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tarp->hash.rss = flows2[i];\n+\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\t/* schedule */\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d:failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/*\n+\t * Now check the resulting inflights on each port.\n+\t */\n+\tif (stats.port_inflight[1] != 3) {\n+\t\tprintf(\"%d:%s: port 1 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\tprintf(\"Inflights, ports 1, 2, 3: %u, %u, %u\\n\",\n+\t\t\t\t(unsigned int)stats.port_inflight[1],\n+\t\t\t\t(unsigned int)stats.port_inflight[2],\n+\t\t\t\t(unsigned int)stats.port_inflight[3]);\n+\t\treturn -1;\n+\t}\n+\tif (stats.port_inflight[2] != 4) {\n+\t\tprintf(\"%d:%s: port 2 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\tprintf(\"Inflights, ports 1, 2, 3: %u, %u, %u\\n\",\n+\t\t\t\t(unsigned int)stats.port_inflight[1],\n+\t\t\t\t(unsigned int)stats.port_inflight[2],\n+\t\t\t\t(unsigned int)stats.port_inflight[3]);\n+\t\treturn -1;\n+\t}\n+\tif (stats.port_inflight[3] != 2) {\n+\t\tprintf(\"%d:%s: port 3 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\tprintf(\"Inflights, ports 1, 2, 3: %u, %u, %u\\n\",\n+\t\t\t\t(unsigned 
int)stats.port_inflight[1],\n+\t\t\t\t(unsigned int)stats.port_inflight[2],\n+\t\t\t\t(unsigned int)stats.port_inflight[3]);\n+\t\treturn -1;\n+\t}\n+\n+\tfor (i = 1; i <= 3; i++) {\n+\t\tstruct rte_event ev;\n+\t\twhile (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))\n+\t\t\trte_event_enqueue_burst(evdev, i, &release_ev, 1);\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+invalid_qid(struct test *t)\n+{\n+\tstruct test_event_dev_stats stats;\n+\tconst int rx_enq = 0;\n+\tint err;\n+\tuint32_t i;\n+\n+\tif (init(t, 1, 4) < 0 ||\n+\t\t\tcreate_ports(t, 4) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* CQ mapping to QID */\n+\tfor (i = 0; i < 4; i++) {\n+\t\terr = rte_event_port_link(evdev, t->port[i], &t->qid[0],\n+\t\t\t\tNULL, 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: error mapping port 1 qid\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/*\n+\t * Send in a packet with an invalid qid to the scheduler.\n+\t * We should see the packed enqueued OK, but the inflights for\n+\t * that packet should not be incremented, and the rx_dropped\n+\t * should be incremented.\n+\t */\n+\tstatic uint32_t flows1[] = {20};\n+\n+\tfor (i = 0; i < RTE_DIM(flows1); i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = t->qid[0] + flows1[i],\n+\t\t\t\t.flow_id = i,\n+\t\t\t\t.mbuf = arp,\n+\t\t};\n+\t\t/* generate pkt and enqueue */\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err < 0) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\t/* call the scheduler */\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/*\n+\t * Now check the resulting inflights on the port, and the rx_dropped.\n+\t */\n+\tif (stats.port_inflight[0] != 0) {\n+\t\tprintf(\"%d:%s: port 1 inflight count not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\tif (stats.port_rx_dropped[0] != 1) {\n+\t\tprintf(\"%d:%s: port 1 drops\\n\", __LINE__, __func__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\t/* each packet drop should only be counted in one place - port or dev */\n+\tif (stats.rx_dropped != 0) {\n+\t\tprintf(\"%d:%s: port 1 dropped count not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+single_packet(struct test *t)\n+{\n+\tconst uint32_t MAGIC_SEQN = 7321;\n+\tstruct rte_event ev;\n+\tstruct test_event_dev_stats stats;\n+\tconst int rx_enq = 0;\n+\tconst int wrk_enq = 2;\n+\tint err;\n+\n+\t/* Create instance with 4 ports */\n+\tif (init(t, 1, 4) < 0 ||\n+\t\t\tcreate_ports(t, 4) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* CQ mapping to QID */\n+\terr = rte_event_port_link(evdev, t->port[wrk_enq], NULL, NULL, 0);\n+\tif (err != 1) {\n+\t\tprintf(\"%d: 
error mapping lb qid\\n\", __LINE__);\n+\t\tcleanup(t);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/************** Gen pkt and enqueue ****************/\n+\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\tif (!arp) {\n+\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tev.op = RTE_EVENT_OP_NEW;\n+\tev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;\n+\tev.mbuf = arp;\n+\tev.queue_id = 0;\n+\tev.flow_id = 3;\n+\tarp->seqn = MAGIC_SEQN;\n+\n+\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\tif (err < 0) {\n+\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (stats.rx_pkts != 1 ||\n+\t\t\tstats.tx_pkts != 1 ||\n+\t\t\tstats.port_inflight[wrk_enq] != 1) {\n+\t\tprintf(\"%d: Sched core didn't handle pkt as expected\\n\",\n+\t\t\t\t__LINE__);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn -1;\n+\t}\n+\n+\tuint32_t deq_pkts;\n+\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, t->port[wrk_enq], &ev, 1, 0);\n+\tif (deq_pkts < 1) {\n+\t\tprintf(\"%d: Failed to deq\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (ev.mbuf->seqn != MAGIC_SEQN) {\n+\t\tprintf(\"%d: magic sequence number not dequeued\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\trte_pktmbuf_free(ev.mbuf);\n+\terr = rte_event_enqueue_burst(evdev, t->port[wrk_enq], &release_ev, 1);\n+\tif (err < 0) {\n+\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_inflight[wrk_enq] != 0) {\n+\t\tprintf(\"%d: port inflight not correct\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+inflight_counts(struct test *t)\n+{\n+\tstruct rte_event ev;\n+\tstruct test_event_dev_stats stats;\n+\tconst int rx_enq = 0;\n+\tconst int p1 = 1;\n+\tconst int p2 = 2;\n+\tint err;\n+\tint i;\n+\n+\t/* Create instance with 4 ports */\n+\tif (init(t, 2, 3) < 0 ||\n+\t\t\tcreate_ports(t, 3) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 2) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* CQ mapping to QID */\n+\terr = rte_event_port_link(evdev, t->port[p1], &t->qid[0], NULL, 1);\n+\tif (err != 1) {\n+\t\tprintf(\"%d: error mapping lb qid\\n\", __LINE__);\n+\t\tcleanup(t);\n+\t\treturn -1;\n+\t}\n+\terr = rte_event_port_link(evdev, t->port[p2], &t->qid[1], NULL, 1);\n+\tif (err != 1) {\n+\t\tprintf(\"%d: error mapping lb qid\\n\", __LINE__);\n+\t\tcleanup(t);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/************** FORWARD ****************/\n+#define QID1_NUM 5\n+\tfor (i = 0; i < QID1_NUM; i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\tgoto err;\n+\t\t}\n+\n+\t\tev.queue_id =  t->qid[0];\n+\t\tev.op = RTE_EVENT_OP_NEW;\n+\t\tev.mbuf = arp;\n+\t\terr = 
rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\tgoto err;\n+\t\t}\n+\t}\n+#define QID2_NUM 3\n+\tfor (i = 0; i < QID2_NUM; i++) {\n+\t\tstruct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);\n+\n+\t\tif (!arp) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\tgoto err;\n+\t\t}\n+\t\tev.queue_id =  t->qid[1];\n+\t\tev.op = RTE_EVENT_OP_NEW;\n+\t\tev.mbuf = arp;\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\tgoto err;\n+\t\t}\n+\t}\n+\n+\t/* schedule */\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (err) {\n+\t\tprintf(\"%d: failed to get stats\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\tif (stats.rx_pkts != QID1_NUM + QID2_NUM ||\n+\t\t\tstats.tx_pkts != QID1_NUM + QID2_NUM) {\n+\t\tprintf(\"%d: Sched core didn't handle pkt as expected\\n\",\n+\t\t\t\t__LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\tif (stats.port_inflight[p1] != QID1_NUM) {\n+\t\tprintf(\"%d: %s port 1 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\tgoto err;\n+\t}\n+\tif (stats.port_inflight[p2] != QID2_NUM) {\n+\t\tprintf(\"%d: %s port 2 inflight not correct\\n\", __LINE__,\n+\t\t\t\t__func__);\n+\t\tgoto err;\n+\t}\n+\n+\t/************** DEQUEUE INFLIGHT COUNT CHECKS  ****************/\n+\t/* port 1 */\n+\tstruct rte_event events[QID1_NUM + QID2_NUM];\n+\tuint32_t deq_pkts = rte_event_dequeue_burst(evdev, t->port[p1], events,\n+\t\t\tRTE_DIM(events), 0);\n+\n+\tif (deq_pkts != QID1_NUM) {\n+\t\tprintf(\"%d: Port 1: DEQUEUE inflight failed\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_inflight[p1] != QID1_NUM) {\n+\t\tprintf(\"%d: port 1 inflight decrement after DEQ != 0\\n\",\n+\t\t\t\t__LINE__);\n+\t\tgoto err;\n+\t}\n+\tfor (i = 0; i < QID1_NUM; i++) {\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[p1], &release_ev,\n+\t\t\t\t1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: %s rte enqueue of inf release failed\\n\",\n+\t\t\t\t__LINE__, __func__);\n+\t\t\tgoto err;\n+\t\t}\n+\t}\n+\n+\t/*\n+\t * As the scheduler core decrements inflights, it needs to run to\n+\t * process packets to act on the drop messages\n+\t */\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_inflight[p1] != 0) {\n+\t\tprintf(\"%d: port 1 inflight NON NULL after DROP\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\t/* port2 */\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, t->port[p2], events,\n+\t\t\tRTE_DIM(events), 0);\n+\tif (deq_pkts != QID2_NUM) {\n+\t\tprintf(\"%d: Port 2: DEQUEUE inflight failed\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_inflight[p2] != QID2_NUM) {\n+\t\tprintf(\"%d: port 1 inflight decrement after DEQ != 0\\n\",\n+\t\t\t\t__LINE__);\n+\t\tgoto err;\n+\t}\n+\tfor (i = 0; i < QID2_NUM; i++) {\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[p2], &release_ev,\n+\t\t\t\t1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: %s rte enqueue of inf release failed\\n\",\n+\t\t\t\t__LINE__, __func__);\n+\t\t\tgoto err;\n+\t\t}\n+\t}\n+\n+\t/*\n+\t * As the scheduler core decrements inflights, it needs to run to\n+\t * process packets to act on the drop messages\n+\t */\n+\trte_event_schedule(evdev);\n+\n+\terr = test_event_dev_stats_get(evdev, &stats);\n+\tif (stats.port_inflight[p2] != 0) 
{\n+\t\tprintf(\"%d: port 2 inflight NON NULL after DROP\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\tcleanup(t);\n+\treturn 0;\n+\n+err:\n+\trte_event_dev_dump(evdev, stdout);\n+\tcleanup(t);\n+\treturn -1;\n+}\n+\n+static int\n+parallel_basic(struct test *t, int check_order)\n+{\n+\tconst uint8_t rx_port = 0;\n+\tconst uint8_t w1_port = 1;\n+\tconst uint8_t w3_port = 3;\n+\tconst uint8_t tx_port = 4;\n+\tint err;\n+\tint i;\n+\tuint32_t deq_pkts, j;\n+\tstruct rte_mbuf *mbufs[3];\n+\tstruct rte_mbuf *mbufs_out[3];\n+\tconst uint32_t MAGIC_SEQN = 1234;\n+\n+\t/* Create instance with 4 ports */\n+\tif (init(t, 2, tx_port + 1) < 0 ||\n+\t\t\tcreate_ports(t, tx_port + 1) < 0 ||\n+\t\t\t(check_order ?  create_ordered_qids(t, 1) :\n+\t\t\t\tcreate_unordered_qids(t, 1)) < 0 ||\n+\t\t\tcreate_directed_qids(t, 1, &tx_port)) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/*\n+\t * CQ mapping to QID\n+\t * We need three ports, all mapped to the same ordered qid0. Then we'll\n+\t * take a packet out to each port, re-enqueue in reverse order,\n+\t * then make sure the reordering has taken place properly when we\n+\t * dequeue from the tx_port.\n+\t *\n+\t * Simplified test setup diagram:\n+\t *\n+\t * rx_port        w1_port\n+\t *        \\     /         \\\n+\t *         qid0 - w2_port - qid1\n+\t *              \\         /     \\\n+\t *                w3_port        tx_port\n+\t */\n+\t/* CQ mapping to QID for LB ports (directed mapped on create) */\n+\tfor (i = w1_port; i <= w3_port; i++) {\n+\t\terr = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,\n+\t\t\t\t1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: error mapping lb qid\\n\", __LINE__);\n+\t\t\tcleanup(t);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* Enqueue 3 packets to the rx port */\n+\tfor (i = 0; i < 3; i++) {\n+\t\tstruct rte_event ev;\n+\t\tmbufs[i] = rte_gen_arp(0, t->mbuf_pool);\n+\t\tif (!mbufs[i]) {\n+\t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\n+\t\tev.queue_id = t->qid[0];\n+\t\tev.op = RTE_EVENT_OP_NEW;\n+\t\tev.mbuf = mbufs[i];\n+\t\tmbufs[i]->seqn = MAGIC_SEQN + i;\n+\n+\t\t/* generate pkt and enqueue */\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: Failed to enqueue pkt %u, retval = %u\\n\",\n+\t\t\t\t\t__LINE__, i, err);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\trte_event_schedule(evdev);\n+\n+\t/* use extra slot to make logic in loops easier */\n+\tstruct rte_event deq_ev[w3_port + 1];\n+\n+\t/* Dequeue the 3 packets, one from each worker port */\n+\tfor (i = w1_port; i <= w3_port; i++) {\n+\t\tdeq_pkts = rte_event_dequeue_burst(evdev, t->port[i],\n+\t\t\t\t&deq_ev[i], 1, 0);\n+\t\tif (deq_pkts != 1) {\n+\t\t\tprintf(\"%d: Failed to deq\\n\", __LINE__);\n+\t\t\trte_event_dev_dump(evdev, stdout);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\t/* Enqueue each packet in reverse order, flushing after each one */\n+\tfor (i = w3_port; i >= w1_port; i--) {\n+\n+\t\tdeq_ev[i].op = RTE_EVENT_OP_FORWARD;\n+\t\tdeq_ev[i].queue_id = t->qid[1];\n+\t\terr = rte_event_enqueue_burst(evdev, t->port[i], &deq_ev[i], 1);\n+\t\tif (err != 1) {\n+\t\t\tprintf(\"%d: Failed to enqueue\\n\", __LINE__);\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\t/* dequeue from the tx ports, we should get 3 packets */\n+\tdeq_pkts = rte_event_dequeue_burst(evdev, 
t->port[tx_port], deq_ev,\n+\t\t\t3, 0);\n+\n+\t/* Check to see if we've got all 3 packets */\n+\tif (deq_pkts != 3) {\n+\t\tprintf(\"%d: expected 3 packets at tx port got %d from port %d\\n\",\n+\t\t\t__LINE__, deq_pkts, tx_port);\n+\t\trte_event_dev_dump(evdev, stdout);\n+\t\treturn 1;\n+\t}\n+\n+\t/* Check to see if the sequence numbers are in expected order */\n+\tif (check_order) {\n+\t\tfor (j = 0 ; j < deq_pkts ; j++) {\n+\t\t\tif (deq_ev[j].mbuf->seqn != MAGIC_SEQN + j) {\n+\t\t\t\tprintf(\n+\t\t\t\t\t\"%d: Incorrect sequence number(%d) from port %d\\n\",\n+\t\t\t\t\t__LINE__, mbufs_out[j]->seqn, tx_port);\n+\t\t\t\treturn -1;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\t/* Destroy the instance */\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static int\n+ordered_basic(struct test *t)\n+{\n+\treturn parallel_basic(t, 1);\n+}\n+\n+static int\n+unordered_basic(struct test *t)\n+{\n+\treturn parallel_basic(t, 0);\n+}\n+\n+static int\n+holb(struct test *t) /* test to check we avoid basic head-of-line blocking */\n+{\n+\tconst struct rte_event new_ev = {\n+\t\t\t.op = RTE_EVENT_OP_NEW\n+\t\t\t/* all other fields zero */\n+\t};\n+\tstruct rte_event ev = new_ev;\n+\tunsigned int rx_port = 0; /* port we get the first flow on */\n+\tchar rx_port_used_stat[64];\n+\tchar rx_port_free_stat[64];\n+\tchar other_port_used_stat[64];\n+\n+\tif (init(t, 1, 2) < 0 ||\n+\t\t\tcreate_ports(t, 2) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 1) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\tint nb_links = rte_event_port_link(evdev, t->port[1], NULL, NULL, 0);\n+\tif (rte_event_port_link(evdev, t->port[0], NULL, NULL, 0) != 1 ||\n+\t\t\tnb_links != 1) {\n+\t\tprintf(\"%d: Error links queue to ports\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\t/* send one packet and see where it goes, port 0 or 1 */\n+\tif (rte_event_enqueue_burst(evdev, t->port[0], &ev, 1) != 1) {\n+\t\tprintf(\"%d: Error doing first enqueue\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\tif (rte_event_dev_xstats_by_name_get(evdev, \"port_0_cq_ring_used\", NULL) != 1)\n+\t\trx_port = 1;\n+\tsnprintf(rx_port_used_stat, sizeof(rx_port_used_stat),\n+\t\t\t\"port_%u_cq_ring_used\", rx_port);\n+\tsnprintf(rx_port_free_stat, sizeof(rx_port_free_stat),\n+\t\t\t\"port_%u_cq_ring_free\", rx_port);\n+\tsnprintf(other_port_used_stat, sizeof(other_port_used_stat),\n+\t\t\t\"port_%u_cq_ring_used\", rx_port ^ 1);\n+\tif (rte_event_dev_xstats_by_name_get(evdev, rx_port_used_stat, NULL) != 1) {\n+\t\tprintf(\"%d: Error, first event not scheduled\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\t/* now fill up the rx port's queue with one flow to cause HOLB */\n+\tdo {\n+\t\tev = new_ev;\n+\t\tif (rte_event_enqueue_burst(evdev, t->port[0], &ev, 1) != 1) {\n+\t\t\tprintf(\"%d: Error with enqueue\\n\", __LINE__);\n+\t\t\tgoto err;\n+\t\t}\n+\t\trte_event_schedule(evdev);\n+\t} while (rte_event_dev_xstats_by_name_get(evdev, rx_port_free_stat, NULL) != 0);\n+\n+\t/* one more packet, which needs to stay in IQ - i.e. 
HOLB */\n+\tev = new_ev;\n+\tif (rte_event_enqueue_burst(evdev, t->port[0], &ev, 1) != 1) {\n+\t\tprintf(\"%d: Error with enqueue\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\t/* check that the other port still has an empty CQ */\n+\tif (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL) != 0) {\n+\t\tprintf(\"%d: Error, second port CQ is not empty\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\t/* check IQ now has one packet */\n+\tif (rte_event_dev_xstats_by_name_get(evdev, \"qid_0_iq_0_used\", NULL) != 1) {\n+\t\tprintf(\"%d: Error, QID does not have exactly 1 packet\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\t/* send another flow, which should pass the other IQ entry */\n+\tev = new_ev;\n+\tev.flow_id = 1;\n+\tif (rte_event_enqueue_burst(evdev, t->port[0], &ev, 1) != 1) {\n+\t\tprintf(\"%d: Error with enqueue\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\trte_event_schedule(evdev);\n+\n+\tif (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL) != 1) {\n+\t\tprintf(\"%d: Error, second flow did not pass out first\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\n+\tif (rte_event_dev_xstats_by_name_get(evdev, \"qid_0_iq_0_used\", NULL) != 1) {\n+\t\tprintf(\"%d: Error, QID does not have exactly 1 packet\\n\", __LINE__);\n+\t\tgoto err;\n+\t}\n+\tcleanup(t);\n+\treturn 0;\n+err:\n+\trte_event_dev_dump(evdev, stdout);\n+\tcleanup(t);\n+\treturn -1;\n+}\n+\n+static int\n+worker_loopback_worker_fn(void *arg)\n+{\n+\tstruct test *t = arg;\n+\tuint8_t port = t->port[1];\n+\tint count = 0;\n+\tint enqd;\n+\n+\t/*\n+\t * Takes packets from the input port and then loops them back through\n+\t * the Queue Manager. Each packet gets looped through QIDs 0-8, 16\n+\t * times so each packet goes through 8*16 = 128 times.\n+\t */\n+\tprintf(\"%d: \\tWorker function started\\n\", __LINE__);\n+\twhile (count < NUM_PACKETS) {\n+#define BURST_SIZE 32\n+\t\tstruct rte_event ev[BURST_SIZE];\n+\t\tuint16_t i, nb_rx = rte_event_dequeue_burst(evdev, port, ev,\n+\t\t\t\tBURST_SIZE, 0);\n+\t\tif (nb_rx == 0) {\n+\t\t\trte_pause();\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tfor (i = 0; i < nb_rx; i++) {\n+\t\t\tev[i].queue_id++;\n+\t\t\tif (ev[i].queue_id != 8) {\n+\t\t\t\tev[i].op = RTE_EVENT_OP_FORWARD;\n+\t\t\t\tenqd = rte_event_enqueue_burst(evdev, port,\n+\t\t\t\t\t\t&ev[i], 1);\n+\t\t\t\tif (enqd != 1) {\n+\t\t\t\t\tprintf(\"%d: Can't enqueue FWD!!\\n\",\n+\t\t\t\t\t\t\t__LINE__);\n+\t\t\t\t\treturn -1;\n+\t\t\t\t}\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\n+\t\t\tev[i].queue_id = 0;\n+\t\t\tev[i].mbuf->udata64++;\n+\t\t\tif (ev[i].mbuf->udata64 != 16) {\n+\t\t\t\tev[i].op = RTE_EVENT_OP_FORWARD;\n+\t\t\t\tenqd = rte_event_enqueue_burst(evdev, port,\n+\t\t\t\t\t\t&ev[i], 1);\n+\t\t\t\tif (enqd != 1) {\n+\t\t\t\t\tprintf(\"%d: Can't enqueue FWD!!\\n\",\n+\t\t\t\t\t\t\t__LINE__);\n+\t\t\t\t\treturn -1;\n+\t\t\t\t}\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\t\t\t/* we have hit 16 iterations through system - drop */\n+\t\t\trte_pktmbuf_free(ev[i].mbuf);\n+\t\t\tcount++;\n+\t\t\tev[i].op = RTE_EVENT_OP_RELEASE;\n+\t\t\tenqd = rte_event_enqueue_burst(evdev, port, &ev[i], 1);\n+\t\t\tif (enqd != 1) {\n+\t\t\t\tprintf(\"%d drop enqueue failed\\n\", __LINE__);\n+\t\t\t\treturn -1;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+worker_loopback_producer_fn(void *arg)\n+{\n+\tstruct test *t = arg;\n+\tuint8_t port = t->port[0];\n+\tuint64_t count = 0;\n+\n+\tprintf(\"%d: \\tProducer function started\\n\", __LINE__);\n+\twhile (count < NUM_PACKETS) {\n+\t\tstruct rte_mbuf *m = 
\n+\t\tstruct rte_mbuf *m = NULL;\n+\t\tdo {\n+\t\t\tm = rte_pktmbuf_alloc(t->mbuf_pool);\n+\t\t} while (m == NULL);\n+\n+\t\tm->udata64 = 0;\n+\n+\t\tstruct rte_event ev = {\n+\t\t\t\t.op = RTE_EVENT_OP_NEW,\n+\t\t\t\t.queue_id = t->qid[0],\n+\t\t\t\t.flow_id = (uintptr_t)m & 0xFFFF,\n+\t\t\t\t.mbuf = m,\n+\t\t};\n+\n+\t\twhile (rte_event_enqueue_burst(evdev, port, &ev, 1) != 1)\n+\t\t\trte_pause();\n+\n+\t\tcount++;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int\n+worker_loopback(struct test *t)\n+{\n+\t/* use a single producer core, and a worker core to see what happens\n+\t * if the worker loops packets back multiple times\n+\t */\n+\tstruct test_event_dev_stats stats;\n+\tuint64_t print_cycles = 0, cycles = 0;\n+\tuint64_t tx_pkts = 0;\n+\tint err;\n+\tint w_lcore, p_lcore;\n+\n+\tif (init(t, 8, 2) < 0 ||\n+\t\t\tcreate_atomic_qids(t, 8) < 0) {\n+\t\tprintf(\"%d: Error initializing device\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\t/* RX port with a low new-event threshold */\n+\tstatic struct rte_event_port_conf conf = {\n+\t\t\t.new_event_threshold = 512,\n+\t\t\t.dequeue_depth = 32,\n+\t\t\t.enqueue_depth = 64,\n+\t};\n+\tif (rte_event_port_setup(evdev, 0, &conf) < 0) {\n+\t\tprintf(\"Error setting up RX port\\n\");\n+\t\treturn -1;\n+\t}\n+\tt->port[0] = 0;\n+\t/* TX port with a higher new-event threshold */\n+\tconf.new_event_threshold = 4096;\n+\tif (rte_event_port_setup(evdev, 1, &conf) < 0) {\n+\t\tprintf(\"Error setting up TX port\\n\");\n+\t\treturn -1;\n+\t}\n+\tt->port[1] = 1;\n+\n+\t/* CQ mapping to QID */\n+\terr = rte_event_port_link(evdev, t->port[1], NULL, NULL, 0);\n+\tif (err != 8) { /* should have mapped all queues */\n+\t\tprintf(\"%d: error mapping TX port to all qids\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tif (rte_event_dev_start(evdev) < 0) {\n+\t\tprintf(\"%d: Error with start call\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tp_lcore = rte_get_next_lcore(\n+\t\t\t/* start core */ -1,\n+\t\t\t/* skip master */ 1,\n+\t\t\t/* wrap */ 0);\n+\tw_lcore = rte_get_next_lcore(p_lcore, 1, 0);\n+\n+\trte_eal_remote_launch(worker_loopback_producer_fn, t, p_lcore);\n+\trte_eal_remote_launch(worker_loopback_worker_fn, t, w_lcore);\n+\n+\tprint_cycles = cycles = rte_get_timer_cycles();\n+\twhile (rte_eal_get_lcore_state(p_lcore) != FINISHED ||\n+\t\t\trte_eal_get_lcore_state(w_lcore) != FINISHED) {\n+\n+\t\trte_event_schedule(evdev);\n+\n+\t\tuint64_t new_cycles = rte_get_timer_cycles();\n+\n+\t\tif (new_cycles - print_cycles > rte_get_timer_hz()) {\n+\t\t\ttest_event_dev_stats_get(evdev, &stats);\n+\t\t\tprintf(\n+\t\t\t\t\"%d: \\tSched Rx = %\" PRIu64 \", Tx = %\" PRIu64 \"\\n\",\n+\t\t\t\t__LINE__, stats.rx_pkts, stats.tx_pkts);\n+\n+\t\t\tprint_cycles = new_cycles;\n+\t\t}\n+\t\t/* watchdog: fail if the Tx count has not moved for ~3 seconds */\n+\t\tif (new_cycles - cycles > rte_get_timer_hz() * 3) {\n+\t\t\ttest_event_dev_stats_get(evdev, &stats);\n+\t\t\tif (stats.tx_pkts == tx_pkts) {\n+\t\t\t\trte_event_dev_dump(evdev, stdout);\n+\t\t\t\tprintf(\n+\t\t\t\t\t\"%d: No schedules for 3 seconds, deadlock\\n\",\n+\t\t\t\t\t__LINE__);\n+\t\t\t\treturn -1;\n+\t\t\t}\n+\t\t\ttx_pkts = stats.tx_pkts;\n+\t\t\tcycles = new_cycles;\n+\t\t}\n+\t}\n+\trte_event_schedule(evdev); /* ensure all completions are flushed */\n+\n+\trte_eal_mp_wait_lcore();\n+\n+\tcleanup(t);\n+\treturn 0;\n+}\n+\n+static struct rte_mempool *eventdev_func_mempool;\n+\n+static int\n+test_sw_eventdev(void)\n+{\n+\tstruct test *t = malloc(sizeof(struct test));\n+\tint ret;\n+\n+\tif (t == NULL) {\n+\t\tprintf(\"%d: Error allocating test struct\\n\", __LINE__);\n+\t\treturn -1;\n+\t}\n+\n+\tconst char *eventdev_name = \"event_sw0\";\n+\tevdev = 
rte_event_dev_get_dev_id(eventdev_name);\n+\tif (evdev < 0) {\n+\t\tprintf(\"%d: Eventdev %s not found - creating.\\n\",\n+\t\t\t\t__LINE__, eventdev_name);\n+\t\tif (rte_eal_vdev_init(eventdev_name, NULL) < 0) {\n+\t\t\tprintf(\"Error creating eventdev\\n\");\n+\t\t\treturn -1;\n+\t\t}\n+\t\tevdev = rte_event_dev_get_dev_id(eventdev_name);\n+\t\tif (evdev < 0) {\n+\t\t\tprintf(\"Error finding newly created eventdev\\n\");\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\n+\t/* Only create mbuf pool once, reuse for each test run */\n+\tif (!eventdev_func_mempool) {\n+\t\teventdev_func_mempool = rte_pktmbuf_pool_create(\n+\t\t\t\t\"EVENTDEV_SW_SA_MBUF_POOL\",\n+\t\t\t\t(1 << 12), /* 4k buffers */\n+\t\t\t\t32 /* MBUF_CACHE_SIZE */,\n+\t\t\t\t0,\n+\t\t\t\t512, /* use very small mbufs */\n+\t\t\t\trte_socket_id());\n+\t\tif (!eventdev_func_mempool) {\n+\t\t\tprintf(\"ERROR creating mempool\\n\");\n+\t\t\treturn -1;\n+\t\t}\n+\t}\n+\tt->mbuf_pool = eventdev_func_mempool;\n+\n+\tprintf(\"*** Running Single Directed Packet test...\\n\");\n+\tret = test_single_directed_packet(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Single Directed Packet test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Single Load Balanced Packet test...\\n\");\n+\tret = single_packet(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Single Packet test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Unordered Basic test...\\n\");\n+\tret = unordered_basic(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Unordered Basic test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Ordered Basic test...\\n\");\n+\tret = ordered_basic(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Ordered Basic test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Burst Packets test...\\n\");\n+\tret = burst_packets(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Burst Packets test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Load Balancing test...\\n\");\n+\tret = load_balancing(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Load Balancing test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Prioritized Directed test...\\n\");\n+\tret = test_priority_directed(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Prioritized Directed test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Prioritized Atomic test...\\n\");\n+\tret = test_priority_atomic(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Prioritized Atomic test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\n+\tprintf(\"*** Running Prioritized Ordered test...\\n\");\n+\tret = test_priority_ordered(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Prioritized Ordered test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Prioritized Unordered test...\\n\");\n+\tret = test_priority_unordered(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Prioritized Unordered test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Invalid QID test...\\n\");\n+\tret = invalid_qid(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Invalid QID test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Load Balancing History test...\\n\");\n+\tret = load_balancing_history(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Load Balancing History test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Inflight Count test...\\n\");\n+\tret = inflight_counts(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Inflight Count test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Abuse 
Inflights test...\\n\");\n+\tret = abuse_inflights(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Abuse Inflights test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running QID Priority test...\\n\");\n+\tret = qid_priorities(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - QID Priority test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tprintf(\"*** Running Head-of-line-blocking test...\\n\");\n+\tret = holb(t);\n+\tif (ret != 0) {\n+\t\tprintf(\"ERROR - Head-of-line-blocking test FAILED.\\n\");\n+\t\treturn ret;\n+\t}\n+\tif (rte_lcore_count() >= 3) {\n+\t\tprintf(\"*** Running Worker loopback test...\\n\");\n+\t\tret = worker_loopback(t);\n+\t\tif (ret != 0) {\n+\t\t\tprintf(\"ERROR - Worker loopback test FAILED.\\n\");\n+\t\t\treturn ret;\n+\t\t}\n+\t} else {\n+\t\tprintf(\"### Not enough cores for worker loopback test.\\n\");\n+\t\tprintf(\"### Need at least 3 cores for test.\\n\");\n+\t}\n+\t/*\n+\t * Free test instance, leaving mempool initialized, and a pointer to it\n+\t * in static eventdev_func_mempool, as it is re-used on re-runs\n+\t */\n+\tfree(t);\n+\n+\treturn 0;\n+}\n+\n+REGISTER_TEST_COMMAND(eventdev_sw_autotest, test_sw_eventdev);\n",
    "prefixes": [
        "dpdk-dev",
        "v2",
        "15/15"
    ]
}