get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch (a full update; all writable fields are replaced).
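Several of the URLs in a patch response can be derived from its other fields: the mbox download URL is the patch's `web_url` with `mbox/` appended, and `list_archive_url` is the project's `list_archive_url_format` template filled with the Message-ID. A minimal sketch using only the standard library and values copied from the sample response below:

```python
import json

# Minimal excerpt of the fields a GET on /api/patches/<id>/ returns
# (values copied from the sample response below).
sample = json.loads("""
{
    "msgid": "<20201027221343.28551-8-david.marchand@redhat.com>",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20201027221343.28551-8-david.marchand@redhat.com/",
    "project": {
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}"
    }
}
""")

# The mbox download URL is the patch's web_url with "mbox/" appended.
mbox_url = sample["web_url"] + "mbox/"

# The list archive URL is the project's format template filled with the
# Message-ID, angle brackets stripped.
archive_url = sample["project"]["list_archive_url_format"].format(
    sample["msgid"].strip("<>"))

print(mbox_url)
print(archive_url)
```

Both derived values match the `mbox` and `list_archive_url` fields in the response below.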

GET /api/patches/82389/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 82389,
    "url": "http://patches.dpdk.org/api/patches/82389/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20201027221343.28551-8-david.marchand@redhat.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20201027221343.28551-8-david.marchand@redhat.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20201027221343.28551-8-david.marchand@redhat.com",
    "date": "2020-10-27T22:13:42",
    "name": "[7/8] event: switch sequence number to dynamic field",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "fc42d9794140f4d8a642ae248c80656b9f0a21b0",
    "submitter": {
        "id": 1173,
        "url": "http://patches.dpdk.org/api/people/1173/?format=api",
        "name": "David Marchand",
        "email": "david.marchand@redhat.com"
    },
    "delegate": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/users/1/?format=api",
        "username": "tmonjalo",
        "first_name": "Thomas",
        "last_name": "Monjalon",
        "email": "thomas@monjalon.net"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20201027221343.28551-8-david.marchand@redhat.com/mbox/",
    "series": [
        {
            "id": 13394,
            "url": "http://patches.dpdk.org/api/series/13394/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=13394",
            "date": "2020-10-27T22:13:35",
            "name": "remove mbuf seqn",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/13394/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/82389/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/82389/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 09311A04B5;\n\tTue, 27 Oct 2020 23:16:36 +0100 (CET)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id A3EF54C95;\n\tTue, 27 Oct 2020 23:14:20 +0100 (CET)",
            "from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [216.205.24.124])\n by dpdk.org (Postfix) with ESMTP id C552A37B0\n for <dev@dpdk.org>; Tue, 27 Oct 2020 23:14:13 +0100 (CET)",
            "from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com\n [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id\n us-mta-237-K5mAId8XN_-JpbCPraaNAw-1; Tue, 27 Oct 2020 18:14:08 -0400",
            "from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com\n [10.5.11.16])\n (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n (No client certificate requested)\n by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 431AFEC503;\n Tue, 27 Oct 2020 22:14:06 +0000 (UTC)",
            "from dmarchan.remote.csb (unknown [10.40.192.40])\n by smtp.corp.redhat.com (Postfix) with ESMTP id 02C125C1BB;\n Tue, 27 Oct 2020 22:14:03 +0000 (UTC)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1603836852;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=jmrp3dCZhm4xvGy6EMUCxphCfu1+JtvonLD1Vcn/n/w=;\n b=dc+94NW4BkIT1J9RXyRy2Fn3QtnxbVSPR/53ls9O57twlCpBgkWlqgvnD/PTofE8djX/41\n JFw2njDHKdxXDTJW3xQXr3OOPzgjvx6TTLKuM7KB/KK8fgGtxTSxTAF3YlNRphZBrkkuBJ\n oxU+90hK16FlIiTcMY8WdOOZCyOXkG8=",
        "X-MC-Unique": "K5mAId8XN_-JpbCPraaNAw-1",
        "From": "David Marchand <david.marchand@redhat.com>",
        "To": "dev@dpdk.org",
        "Cc": "Jerin Jacob <jerinj@marvell.com>,\n Pavan Nikhilesh <pbhagavatula@marvell.com>,\n Liang Ma <liang.j.ma@intel.com>, Peter Mccarthy <peter.mccarthy@intel.com>,\n Harry van Haaren <harry.van.haaren@intel.com>,\n Ray Kinsella <mdr@ashroe.eu>, Neil Horman <nhorman@tuxdriver.com>",
        "Date": "Tue, 27 Oct 2020 23:13:42 +0100",
        "Message-Id": "<20201027221343.28551-8-david.marchand@redhat.com>",
        "In-Reply-To": "<20201027221343.28551-1-david.marchand@redhat.com>",
        "References": "<20201027221343.28551-1-david.marchand@redhat.com>",
        "MIME-Version": "1.0",
        "X-Scanned-By": "MIMEDefang 2.79 on 10.5.11.16",
        "Authentication-Results": "relay.mimecast.com;\n auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=david.marchand@redhat.com",
        "X-Mimecast-Spam-Score": "0",
        "X-Mimecast-Originator": "redhat.com",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH 7/8] event: switch sequence number to dynamic\n\tfield",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "The eventdev drivers have been hacking the deprecated field seqn for\ninternal test usage.\nIt is moved to a dynamic mbuf field in order to allow removal of seqn.\n\nSigned-off-by: David Marchand <david.marchand@redhat.com>\n---\n app/test-eventdev/evt_main.c                  |  3 ++\n app/test-eventdev/test_order_common.c         |  2 +-\n app/test-eventdev/test_order_common.h         |  5 ++-\n drivers/event/octeontx/ssovf_evdev_selftest.c | 32 ++++++++--------\n drivers/event/octeontx2/otx2_evdev_selftest.c | 31 +++++++--------\n drivers/event/opdl/opdl_test.c                |  8 ++--\n drivers/event/sw/sw_evdev_selftest.c          | 34 +++++++++--------\n lib/librte_eventdev/rte_eventdev.c            | 21 +++++++++-\n lib/librte_eventdev/rte_eventdev.h            | 38 ++++++++++++++++---\n lib/librte_eventdev/version.map               |  2 +\n 10 files changed, 116 insertions(+), 60 deletions(-)",
    "diff": "diff --git a/app/test-eventdev/evt_main.c b/app/test-eventdev/evt_main.c\nindex a8d304bab3..832bb21d7c 100644\n--- a/app/test-eventdev/evt_main.c\n+++ b/app/test-eventdev/evt_main.c\n@@ -89,6 +89,9 @@ main(int argc, char **argv)\n \tif (!evdevs)\n \t\trte_panic(\"no eventdev devices found\\n\");\n \n+\tif (rte_event_test_seqn_dynfield_register() < 0)\n+\t\trte_panic(\"failed to register event dev sequence number\\n\");\n+\n \t/* Populate the default values of the options */\n \tevt_options_default(&opt);\n \ndiff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c\nindex c5f7317440..d15ff80273 100644\n--- a/app/test-eventdev/test_order_common.c\n+++ b/app/test-eventdev/test_order_common.c\n@@ -50,7 +50,7 @@ order_producer(void *arg)\n \n \t\tconst flow_id_t flow = (uintptr_t)m % nb_flows;\n \t\t/* Maintain seq number per flow */\n-\t\tm->seqn = producer_flow_seq[flow]++;\n+\t\t*rte_event_test_seqn(m) = producer_flow_seq[flow]++;\n \t\tflow_id_save(flow, m, &ev);\n \n \t\twhile (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) {\ndiff --git a/app/test-eventdev/test_order_common.h b/app/test-eventdev/test_order_common.h\nindex 9e3415e421..d4ad31da46 100644\n--- a/app/test-eventdev/test_order_common.h\n+++ b/app/test-eventdev/test_order_common.h\n@@ -89,9 +89,10 @@ order_process_stage_1(struct test_order *const t,\n {\n \tconst uint32_t flow = (uintptr_t)ev->mbuf % nb_flows;\n \t/* compare the seqn against expected value */\n-\tif (ev->mbuf->seqn != expected_flow_seq[flow]) {\n+\tif (*rte_event_test_seqn(ev->mbuf) != expected_flow_seq[flow]) {\n \t\tevt_err(\"flow=%x seqn mismatch got=%x expected=%x\",\n-\t\t\tflow, ev->mbuf->seqn, expected_flow_seq[flow]);\n+\t\t\tflow, *rte_event_test_seqn(ev->mbuf),\n+\t\t\texpected_flow_seq[flow]);\n \t\tt->err = true;\n \t\trte_smp_wmb();\n \t}\ndiff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c\nindex 7a2b7ded25..b99889e2cc 
100644\n--- a/drivers/event/octeontx/ssovf_evdev_selftest.c\n+++ b/drivers/event/octeontx/ssovf_evdev_selftest.c\n@@ -300,7 +300,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,\n \t\tm = rte_pktmbuf_alloc(eventdev_test_mempool);\n \t\tRTE_TEST_ASSERT_NOT_NULL(m, \"mempool alloc failed\");\n \n-\t\tm->seqn = i;\n+\t\t*rte_event_test_seqn(m) = i;\n \t\tupdate_event_and_validation_attr(m, &ev, flow_id, event_type,\n \t\t\tsub_event_type, sched_type, queue, port);\n \t\trte_event_enqueue_burst(evdev, port, &ev, 1);\n@@ -320,7 +320,8 @@ check_excess_events(uint8_t port)\n \t\tvalid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);\n \n \t\tRTE_TEST_ASSERT_SUCCESS(valid_event,\n-\t\t\t\t\"Unexpected valid event=%d\", ev.mbuf->seqn);\n+\t\t\t\"Unexpected valid event=%d\",\n+\t\t\t*rte_event_test_seqn(ev.mbuf));\n \t}\n \treturn 0;\n }\n@@ -425,8 +426,9 @@ static int\n validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)\n {\n \tRTE_SET_USED(port);\n-\tRTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, \"index=%d != seqn=%d\",\n-\t\t\tindex, ev->mbuf->seqn);\n+\tRTE_TEST_ASSERT_EQUAL(index, *rte_event_test_seqn(ev->mbuf),\n+\t\t\"index=%d != seqn=%d\", index,\n+\t\t*rte_event_test_seqn(ev->mbuf));\n \treturn 0;\n }\n \n@@ -509,10 +511,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)\n \n \texpected_val += ev->queue_id;\n \tRTE_SET_USED(port);\n-\tRTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,\n-\t\"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d\",\n-\t\t\tev->mbuf->seqn, index, expected_val, range,\n-\t\t\tqueue_count, MAX_EVENTS);\n+\tRTE_TEST_ASSERT_EQUAL(*rte_event_test_seqn(ev->mbuf), expected_val,\n+\t\t\"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d\",\n+\t\t*rte_event_test_seqn(ev->mbuf), index, expected_val, range,\n+\t\tqueue_count, MAX_EVENTS);\n \treturn 0;\n }\n \n@@ -537,7 +539,7 @@ test_multi_queue_priority(void)\n \t\tm = 
rte_pktmbuf_alloc(eventdev_test_mempool);\n \t\tRTE_TEST_ASSERT_NOT_NULL(m, \"mempool alloc failed\");\n \n-\t\tm->seqn = i;\n+\t\t*rte_event_test_seqn(m) = i;\n \t\tqueue = i % queue_count;\n \t\tupdate_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,\n \t\t\t0, RTE_SCHED_TYPE_PARALLEL, queue, 0);\n@@ -904,7 +906,7 @@ worker_flow_based_pipeline(void *arg)\n \t\t\tev.op = RTE_EVENT_OP_FORWARD;\n \t\t\trte_event_enqueue_burst(evdev, port, &ev, 1);\n \t\t} else if (ev.sub_event_type == 1) { /* Events from stage 1*/\n-\t\t\tif (seqn_list_update(ev.mbuf->seqn) == 0) {\n+\t\t\tif (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) {\n \t\t\t\trte_pktmbuf_free(ev.mbuf);\n \t\t\t\trte_atomic32_sub(total_events, 1);\n \t\t\t} else {\n@@ -939,7 +941,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events */\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(\n \t\t0x1 /*flow_id */,\n \t\tRTE_EVENT_TYPE_CPU /* event_type */,\n@@ -1059,7 +1061,7 @@ worker_group_based_pipeline(void *arg)\n \t\t\tev.op = RTE_EVENT_OP_FORWARD;\n \t\t\trte_event_enqueue_burst(evdev, port, &ev, 1);\n \t\t} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/\n-\t\t\tif (seqn_list_update(ev.mbuf->seqn) == 0) {\n+\t\t\tif (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) {\n \t\t\t\trte_pktmbuf_free(ev.mbuf);\n \t\t\t\trte_atomic32_sub(total_events, 1);\n \t\t\t} else {\n@@ -1101,7 +1103,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events */\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(\n \t\t0x1 /*flow_id */,\n \t\tRTE_EVENT_TYPE_CPU /* event_type */,\n@@ -1238,7 +1240,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events 
*/\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(\n \t\t0x1 /*flow_id */,\n \t\tRTE_EVENT_TYPE_CPU /* event_type */,\n@@ -1360,7 +1362,7 @@ worker_ordered_flow_producer(void *arg)\n \t\tif (m == NULL)\n \t\t\tcontinue;\n \n-\t\tm->seqn = counter++;\n+\t\t*rte_event_test_seqn(m) = counter++;\n \n \t\tstruct rte_event ev = {.event = 0, .u64 = 0};\n \ndiff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c\nindex 334a9ccb7c..c6381ac785 100644\n--- a/drivers/event/octeontx2/otx2_evdev_selftest.c\n+++ b/drivers/event/octeontx2/otx2_evdev_selftest.c\n@@ -279,7 +279,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,\n \t\tm = rte_pktmbuf_alloc(eventdev_test_mempool);\n \t\tRTE_TEST_ASSERT_NOT_NULL(m, \"mempool alloc failed\");\n \n-\t\tm->seqn = i;\n+\t\t*rte_event_test_seqn(m) = i;\n \t\tupdate_event_and_validation_attr(m, &ev, flow_id, event_type,\n \t\t\t\t\t\t sub_event_type, sched_type,\n \t\t\t\t\t\t queue, port);\n@@ -301,7 +301,7 @@ check_excess_events(uint8_t port)\n \n \t\tRTE_TEST_ASSERT_SUCCESS(valid_event,\n \t\t\t\t\t\"Unexpected valid event=%d\",\n-\t\t\t\t\tev.mbuf->seqn);\n+\t\t\t\t\t*rte_event_test_seqn(ev.mbuf));\n \t}\n \treturn 0;\n }\n@@ -406,8 +406,9 @@ static int\n validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)\n {\n \tRTE_SET_USED(port);\n-\tRTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, \"index=%d != seqn=%d\",\n-\t\t\t      index, ev->mbuf->seqn);\n+\tRTE_TEST_ASSERT_EQUAL(index, *rte_event_test_seqn(ev->mbuf),\n+\t\t\"index=%d != seqn=%d\",\n+\t\tindex, *rte_event_test_seqn(ev->mbuf));\n \treturn 0;\n }\n \n@@ -493,10 +494,10 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)\n \n \texpected_val += ev->queue_id;\n \tRTE_SET_USED(port);\n-\tRTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,\n-\t\"seqn=%d index=%d expected=%d range=%d nb_queues=%d 
max_event=%d\",\n-\t\t\t      ev->mbuf->seqn, index, expected_val, range,\n-\t\t\t      queue_count, MAX_EVENTS);\n+\tRTE_TEST_ASSERT_EQUAL(*rte_event_test_seqn(ev->mbuf), expected_val,\n+\t\t\"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d\",\n+\t\t*rte_event_test_seqn(ev->mbuf), index, expected_val, range,\n+\t\tqueue_count, MAX_EVENTS);\n \treturn 0;\n }\n \n@@ -523,7 +524,7 @@ test_multi_queue_priority(void)\n \t\tm = rte_pktmbuf_alloc(eventdev_test_mempool);\n \t\tRTE_TEST_ASSERT_NOT_NULL(m, \"mempool alloc failed\");\n \n-\t\tm->seqn = i;\n+\t\t*rte_event_test_seqn(m) = i;\n \t\tqueue = i % queue_count;\n \t\tupdate_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,\n \t\t\t\t\t\t 0, RTE_SCHED_TYPE_PARALLEL,\n@@ -888,7 +889,7 @@ worker_flow_based_pipeline(void *arg)\n \t\t\tev.op = RTE_EVENT_OP_FORWARD;\n \t\t\trte_event_enqueue_burst(evdev, port, &ev, 1);\n \t\t} else if (ev.sub_event_type == 1) { /* Events from stage 1*/\n-\t\t\tif (seqn_list_update(ev.mbuf->seqn) == 0) {\n+\t\t\tif (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) {\n \t\t\t\trte_pktmbuf_free(ev.mbuf);\n \t\t\t\trte_atomic32_sub(total_events, 1);\n \t\t\t} else {\n@@ -923,7 +924,7 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events */\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(0x1 /*flow_id */,\n \t\t\t    RTE_EVENT_TYPE_CPU /* event_type */,\n \t\t\t    0 /* sub_event_type (stage 0) */,\n@@ -1043,7 +1044,7 @@ worker_group_based_pipeline(void *arg)\n \t\t\tev.op = RTE_EVENT_OP_FORWARD;\n \t\t\trte_event_enqueue_burst(evdev, port, &ev, 1);\n \t\t} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/\n-\t\t\tif (seqn_list_update(ev.mbuf->seqn) == 0) {\n+\t\t\tif (seqn_list_update(*rte_event_test_seqn(ev.mbuf)) == 0) {\n \t\t\t\trte_pktmbuf_free(ev.mbuf);\n \t\t\t\trte_atomic32_sub(total_events, 1);\n \t\t\t} else {\n@@ 
-1084,7 +1085,7 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events */\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(0x1 /*flow_id */,\n \t\t\t    RTE_EVENT_TYPE_CPU /* event_type */,\n \t\t\t    0 /* sub_event_type (stage 0) */,\n@@ -1222,7 +1223,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))\n \t\treturn 0;\n \t}\n \n-\t/* Injects events with m->seqn=0 to total_events */\n+\t/* Injects events with a 0 sequence number to total_events */\n \tret = inject_events(0x1 /*flow_id */,\n \t\t\t    RTE_EVENT_TYPE_CPU /* event_type */,\n \t\t\t    0 /* sub_event_type (stage 0) */,\n@@ -1348,7 +1349,7 @@ worker_ordered_flow_producer(void *arg)\n \t\tif (m == NULL)\n \t\t\tcontinue;\n \n-\t\tm->seqn = counter++;\n+\t\t*rte_event_test_seqn(m) = counter++;\n \n \t\tstruct rte_event ev = {.event = 0, .u64 = 0};\n \ndiff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c\nindex e7a32fbd31..cbf33d38f7 100644\n--- a/drivers/event/opdl/opdl_test.c\n+++ b/drivers/event/opdl/opdl_test.c\n@@ -256,7 +256,7 @@ ordered_basic(struct test *t)\n \t\tev.queue_id = t->qid[0];\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = mbufs[i];\n-\t\tmbufs[i]->seqn = MAGIC_SEQN + i;\n+\t\t*rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i;\n \n \t\t/* generate pkt and enqueue */\n \t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\n@@ -281,7 +281,7 @@ ordered_basic(struct test *t)\n \t\t\trte_event_dev_dump(evdev, stdout);\n \t\t\treturn -1;\n \t\t}\n-\t\tseq = deq_ev[i].mbuf->seqn  - MAGIC_SEQN;\n+\t\tseq = *rte_event_test_seqn(deq_ev[i].mbuf)  - MAGIC_SEQN;\n \n \t\tif (seq != (i-1)) {\n \t\t\tPMD_DRV_LOG(ERR, \" seq test failed ! 
eq is %d , \"\n@@ -396,7 +396,7 @@ atomic_basic(struct test *t)\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.flow_id = 1;\n \t\tev.mbuf = mbufs[i];\n-\t\tmbufs[i]->seqn = MAGIC_SEQN + i;\n+\t\t*rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i;\n \n \t\t/* generate pkt and enqueue */\n \t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\n@@ -625,7 +625,7 @@ single_link_w_stats(struct test *t)\n \t\tev.queue_id = t->qid[0];\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = mbufs[i];\n-\t\tmbufs[i]->seqn = 1234 + i;\n+\t\t*rte_event_test_seqn(mbufs[i]) = 1234 + i;\n \n \t\t/* generate pkt and enqueue */\n \t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\ndiff --git a/drivers/event/sw/sw_evdev_selftest.c b/drivers/event/sw/sw_evdev_selftest.c\nindex ad4fc0eed7..47f5b55651 100644\n--- a/drivers/event/sw/sw_evdev_selftest.c\n+++ b/drivers/event/sw/sw_evdev_selftest.c\n@@ -380,7 +380,7 @@ run_prio_packet_test(struct test *t)\n \t\t\tprintf(\"%d: gen of pkt failed\\n\", __LINE__);\n \t\t\treturn -1;\n \t\t}\n-\t\tarp->seqn = MAGIC_SEQN[i];\n+\t\t*rte_event_test_seqn(arp) = MAGIC_SEQN[i];\n \n \t\tev = (struct rte_event){\n \t\t\t.priority = PRIORITY[i],\n@@ -419,7 +419,7 @@ run_prio_packet_test(struct test *t)\n \t\trte_event_dev_dump(evdev, stdout);\n \t\treturn -1;\n \t}\n-\tif (ev.mbuf->seqn != MAGIC_SEQN[1]) {\n+\tif (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN[1]) {\n \t\tprintf(\"%d: first packet out not highest priority\\n\",\n \t\t\t\t__LINE__);\n \t\trte_event_dev_dump(evdev, stdout);\n@@ -433,7 +433,7 @@ run_prio_packet_test(struct test *t)\n \t\trte_event_dev_dump(evdev, stdout);\n \t\treturn -1;\n \t}\n-\tif (ev2.mbuf->seqn != MAGIC_SEQN[0]) {\n+\tif (*rte_event_test_seqn(ev2.mbuf) != MAGIC_SEQN[0]) {\n \t\tprintf(\"%d: second packet out not lower priority\\n\",\n \t\t\t\t__LINE__);\n \t\trte_event_dev_dump(evdev, stdout);\n@@ -477,7 +477,7 @@ test_single_directed_packet(struct test *t)\n \t}\n \n \tconst uint32_t MAGIC_SEQN = 
4711;\n-\tarp->seqn = MAGIC_SEQN;\n+\t*rte_event_test_seqn(arp) = MAGIC_SEQN;\n \n \t/* generate pkt and enqueue */\n \terr = rte_event_enqueue_burst(evdev, rx_enq, &ev, 1);\n@@ -516,7 +516,7 @@ test_single_directed_packet(struct test *t)\n \t\treturn -1;\n \t}\n \n-\tif (ev.mbuf->seqn != MAGIC_SEQN) {\n+\tif (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN) {\n \t\tprintf(\"%d: error magic sequence number not dequeued\\n\",\n \t\t\t\t__LINE__);\n \t\treturn -1;\n@@ -934,7 +934,7 @@ xstats_tests(struct test *t)\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = arp;\n \t\tev.flow_id = 7;\n-\t\tarp->seqn = i;\n+\t\t*rte_event_test_seqn(arp) = i;\n \n \t\tint err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);\n \t\tif (err != 1) {\n@@ -1485,7 +1485,7 @@ xstats_id_reset_tests(struct test *t)\n \t\tev.queue_id = t->qid[i];\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = arp;\n-\t\tarp->seqn = i;\n+\t\t*rte_event_test_seqn(arp) = i;\n \n \t\tint err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);\n \t\tif (err != 1) {\n@@ -1873,7 +1873,7 @@ qid_priorities(struct test *t)\n \t\tev.queue_id = t->qid[i];\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = arp;\n-\t\tarp->seqn = i;\n+\t\t*rte_event_test_seqn(arp) = i;\n \n \t\tint err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);\n \t\tif (err != 1) {\n@@ -1894,7 +1894,7 @@ qid_priorities(struct test *t)\n \t\treturn -1;\n \t}\n \tfor (i = 0; i < 3; i++) {\n-\t\tif (ev[i].mbuf->seqn != 2-i) {\n+\t\tif (*rte_event_test_seqn(ev[i].mbuf) != 2-i) {\n \t\t\tprintf(\n \t\t\t\t\"%d: qid priority test: seqn %d incorrectly prioritized\\n\",\n \t\t\t\t\t__LINE__, i);\n@@ -2371,7 +2371,7 @@ single_packet(struct test *t)\n \tev.mbuf = arp;\n \tev.queue_id = 0;\n \tev.flow_id = 3;\n-\tarp->seqn = MAGIC_SEQN;\n+\t*rte_event_test_seqn(arp) = MAGIC_SEQN;\n \n \terr = rte_event_enqueue_burst(evdev, t->port[rx_enq], &ev, 1);\n \tif (err != 1) {\n@@ -2411,7 +2411,7 @@ single_packet(struct test *t)\n \t}\n \n \terr = 
test_event_dev_stats_get(evdev, &stats);\n-\tif (ev.mbuf->seqn != MAGIC_SEQN) {\n+\tif (*rte_event_test_seqn(ev.mbuf) != MAGIC_SEQN) {\n \t\tprintf(\"%d: magic sequence number not dequeued\\n\", __LINE__);\n \t\treturn -1;\n \t}\n@@ -2684,7 +2684,7 @@ parallel_basic(struct test *t, int check_order)\n \t\tev.queue_id = t->qid[0];\n \t\tev.op = RTE_EVENT_OP_NEW;\n \t\tev.mbuf = mbufs[i];\n-\t\tmbufs[i]->seqn = MAGIC_SEQN + i;\n+\t\t*rte_event_test_seqn(mbufs[i]) = MAGIC_SEQN + i;\n \n \t\t/* generate pkt and enqueue */\n \t\terr = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);\n@@ -2739,10 +2739,12 @@ parallel_basic(struct test *t, int check_order)\n \t/* Check to see if the sequence numbers are in expected order */\n \tif (check_order) {\n \t\tfor (j = 0 ; j < deq_pkts ; j++) {\n-\t\t\tif (deq_ev[j].mbuf->seqn != MAGIC_SEQN + j) {\n-\t\t\t\tprintf(\n-\t\t\t\t\t\"%d: Incorrect sequence number(%d) from port %d\\n\",\n-\t\t\t\t\t__LINE__, mbufs_out[j]->seqn, tx_port);\n+\t\t\tif (*rte_event_test_seqn(deq_ev[j].mbuf) !=\n+\t\t\t\t\tMAGIC_SEQN + j) {\n+\t\t\t\tprintf(\"%d: Incorrect sequence number(%d) from port %d\\n\",\n+\t\t\t\t\t__LINE__,\n+\t\t\t\t\t*rte_event_test_seqn(mbufs_out[j]),\n+\t\t\t\t\ttx_port);\n \t\t\t\treturn -1;\n \t\t\t}\n \t\t}\ndiff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c\nindex 322453c532..61ff6d3404 100644\n--- a/lib/librte_eventdev/rte_eventdev.c\n+++ b/lib/librte_eventdev/rte_eventdev.c\n@@ -109,6 +109,22 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)\n \treturn 0;\n }\n \n+#define RTE_EVENT_TEST_SEQN_DYNFIELD_NAME \"rte_event_test_seqn_dynfield\"\n+int rte_event_test_seqn_dynfield_offset = -1;\n+\n+int\n+rte_event_test_seqn_dynfield_register(void)\n+{\n+\tstatic const struct rte_mbuf_dynfield event_test_seqn_dynfield_desc = {\n+\t\t.name = RTE_EVENT_TEST_SEQN_DYNFIELD_NAME,\n+\t\t.size = sizeof(rte_event_test_seqn_t),\n+\t\t.align = 
__alignof__(rte_event_test_seqn_t),\n+\t};\n+\trte_event_test_seqn_dynfield_offset =\n+\t\trte_mbuf_dynfield_register(&event_test_seqn_dynfield_desc);\n+\treturn rte_event_test_seqn_dynfield_offset;\n+}\n+\n int\n rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id,\n \t\t\t\tuint32_t *caps)\n@@ -1247,8 +1263,11 @@ int rte_event_dev_selftest(uint8_t dev_id)\n \tRTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);\n \tstruct rte_eventdev *dev = &rte_eventdevs[dev_id];\n \n-\tif (dev->dev_ops->dev_selftest != NULL)\n+\tif (dev->dev_ops->dev_selftest != NULL) {\n+\t\tif (rte_event_test_seqn_dynfield_register() < 0)\n+\t\t\treturn -ENOMEM;\n \t\treturn (*dev->dev_ops->dev_selftest)();\n+\t}\n \treturn -ENOTSUP;\n }\n \ndiff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h\nindex ce1fc2ce0f..1656ff8dce 100644\n--- a/lib/librte_eventdev/rte_eventdev.h\n+++ b/lib/librte_eventdev/rte_eventdev.h\n@@ -211,13 +211,15 @@ extern \"C\" {\n #endif\n \n #include <rte_common.h>\n+#include <rte_compat.h>\n #include <rte_config.h>\n+#include <rte_mbuf.h>\n+#include <rte_mbuf_dyn.h>\n #include <rte_memory.h>\n #include <rte_errno.h>\n \n #include \"rte_eventdev_trace_fp.h\"\n \n-struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */\n struct rte_event;\n \n /* Event device capability bitmap flags */\n@@ -570,9 +572,9 @@ struct rte_event_queue_conf {\n \t */\n \tuint32_t nb_atomic_order_sequences;\n \t/**< The maximum number of outstanding events waiting to be\n-\t * reordered by this queue. In other words, the number of entries in\n-\t * this queue’s reorder buffer.When the number of events in the\n-\t * reorder buffer reaches to *nb_atomic_order_sequences* then the\n+\t * event_tested by this queue. 
In other words, the number of entries in\n+\t * this queue’s event_test buffer.When the number of events in the\n+\t * event_test buffer reaches to *nb_atomic_order_sequences* then the\n \t * scheduler cannot schedule the events from this queue and invalid\n \t * event will be returned from dequeue until one or more entries are\n \t * freed up/released.\n@@ -935,7 +937,7 @@ rte_event_dev_close(uint8_t dev_id);\n  * Event ordering is based on the received event(s), but also other\n  * (newly allocated or stored) events are ordered when enqueued within the same\n  * ordered context. Events not enqueued (e.g. released or stored) within the\n- * context are  considered missing from reordering and are skipped at this time\n+ * context are  considered missing from event_testing and are skipped at this time\n  * (but can be ordered again within another context).\n  *\n  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE\n@@ -1021,7 +1023,7 @@ rte_event_dev_close(uint8_t dev_id);\n  * then this function hints the scheduler that the user has done all that need\n  * to maintain event order in the current ordered context.\n  * The scheduler is allowed to release the ordered context of this port and\n- * avoid reordering any following enqueues.\n+ * avoid event_testing any following enqueues.\n  *\n  * Early ordered context release may increase parallelism and thus system\n  * performance.\n@@ -1111,6 +1113,30 @@ struct rte_event {\n \t};\n };\n \n+typedef uint32_t rte_event_test_seqn_t;\n+extern int rte_event_test_seqn_dynfield_offset;\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * Read test sequence number from mbuf.\n+ *\n+ * @param mbuf Structure to read from.\n+ * @return pointer to test sequence number.\n+ */\n+__rte_experimental\n+static inline rte_event_test_seqn_t *\n+rte_event_test_seqn(const struct rte_mbuf *mbuf)\n+{\n+\treturn RTE_MBUF_DYNFIELD(mbuf, 
rte_event_test_seqn_dynfield_offset,\n+\t\trte_event_test_seqn_t *);\n+}\n+\n+__rte_experimental\n+int\n+rte_event_test_seqn_dynfield_register(void);\n+\n /* Ethdev Rx adapter capability bitmap flags */\n #define RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT\t0x1\n /**< This flag is sent when the packet transfer mechanism is in HW.\ndiff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map\nindex 8ae8420f9b..e49382ba99 100644\n--- a/lib/librte_eventdev/version.map\n+++ b/lib/librte_eventdev/version.map\n@@ -138,4 +138,6 @@ EXPERIMENTAL {\n \t__rte_eventdev_trace_port_setup;\n \t# added in 20.11\n \trte_event_pmd_pci_probe_named;\n+\trte_event_test_seqn_dynfield_offset;\n+\trte_event_test_seqn_dynfield_register;\n };\n",
    "prefixes": [
        "7/8"
    ]
}
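The Allow header above lists PUT and PATCH, so a client with write access can update mutable fields such as state, delegate, or archived. A hedged sketch of preparing such a request with the standard library (the token value is hypothetical, and which fields are writable depends on the server and the account's maintainer rights; the request is built but deliberately not sent):

```python
import json
import urllib.request

# Hypothetical API token; write access requires maintainer rights
# on the project.
TOKEN = "0123456789abcdef"

# Only a subset of fields (e.g. state, delegate, archived) is writable.
body = json.dumps({"state": "accepted"}).encode()

req = urllib.request.Request(
    "http://patches.dpdk.org/api/patches/82389/",
    data=body,
    method="PATCH",
    headers={
        "Authorization": "Token " + TOKEN,
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would actually send it; omitted here
# since this sketch is not pointed at a live server.
print(req.get_method(), req.full_url)
```

A PUT would be built the same way but must carry every writable field, since it replaces the resource rather than merging into it.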