get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request body are changed).

put:
Update a patch (a full update of the writable fields).

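The endpoint can be driven with a short script. A minimal sketch, assuming the public patches.dpdk.org instance and using only the standard library; the helper names (`patch_url`, `fetch_patch`, `set_patch_state`) are illustrative, not part of any official client, and state changes require a maintainer API token:

```python
"""Sketch of reading and updating a single Patchwork patch resource."""
import json
import urllib.request

API_BASE = "http://patches.dpdk.org/api"


def patch_url(patch_id: int) -> str:
    """Build the canonical URL for one patch resource."""
    return f"{API_BASE}/patches/{patch_id}/"


def fetch_patch(patch_id: int) -> dict:
    """GET the patch as JSON; reads need no authentication."""
    with urllib.request.urlopen(patch_url(patch_id)) as resp:
        return json.load(resp)


def set_patch_state(patch_id: int, state: str, token: str) -> dict:
    """PATCH a single writable field ('state'), authenticated by token."""
    body = json.dumps({"state": state}).encode()
    req = urllib.request.Request(
        patch_url(patch_id),
        data=body,
        method="PATCH",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example usage (requires network access):
#   info = fetch_patch(88678)
#   print(info["name"], info["state"])
```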
GET /api/patches/88678/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 88678,
    "url": "http://patches.dpdk.org/api/patches/88678/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20210306162942.6845-37-pbhagavatula@marvell.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210306162942.6845-37-pbhagavatula@marvell.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210306162942.6845-37-pbhagavatula@marvell.com",
    "date": "2021-03-06T16:29:41",
    "name": "[36/36] event/cnxk: add Tx adapter fastpath ops",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "e51551ce0b9f0ee5647992fc9998fadd276cc735",
    "submitter": {
        "id": 1183,
        "url": "http://patches.dpdk.org/api/people/1183/?format=api",
        "name": "Pavan Nikhilesh Bhagavatula",
        "email": "pbhagavatula@marvell.com"
    },
    "delegate": {
        "id": 310,
        "url": "http://patches.dpdk.org/api/users/310/?format=api",
        "username": "jerin",
        "first_name": "Jerin",
        "last_name": "Jacob",
        "email": "jerinj@marvell.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20210306162942.6845-37-pbhagavatula@marvell.com/mbox/",
    "series": [
        {
            "id": 15516,
            "url": "http://patches.dpdk.org/api/series/15516/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=15516",
            "date": "2021-03-06T16:29:05",
            "name": "Marvell CNXK Event device Driver",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/15516/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/88678/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/88678/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id BEC8DA0548;\n\tSat,  6 Mar 2021 17:36:33 +0100 (CET)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 1C4E122A52F;\n\tSat,  6 Mar 2021 17:32:12 +0100 (CET)",
            "from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com\n [67.231.148.174])\n by mails.dpdk.org (Postfix) with ESMTP id 772BB22A44F\n for <dev@dpdk.org>; Sat,  6 Mar 2021 17:32:07 +0100 (CET)",
            "from pps.filterd (m0045849.ppops.net [127.0.0.1])\n by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id\n 126GRndB028142 for <dev@dpdk.org>; Sat, 6 Mar 2021 08:32:06 -0800",
            "from dc5-exch01.marvell.com ([199.233.59.181])\n by mx0a-0016f401.pphosted.com with ESMTP id 3747yurf04-7\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT)\n for <dev@dpdk.org>; Sat, 06 Mar 2021 08:32:06 -0800",
            "from SC-EXCH03.marvell.com (10.93.176.83) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2;\n Sat, 6 Mar 2021 08:32:03 -0800",
            "from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH03.marvell.com\n (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1497.2;\n Sat, 6 Mar 2021 08:32:02 -0800",
            "from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend\n Transport; Sat, 6 Mar 2021 08:32:02 -0800",
            "from BG-LT7430.marvell.com (unknown [10.193.68.121])\n by maili.marvell.com (Postfix) with ESMTP id ED0AD3F703F;\n Sat,  6 Mar 2021 08:32:00 -0800 (PST)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-transfer-encoding : content-type; s=pfpt0220;\n bh=TOnPOe70js+bosGjklKgB9e1HesZLxFA6qvwhUuI75Q=;\n b=bDUVWll/AUDbPInuPHEplOfAWFMI88A+YNByIuc7zHwa09bq+78cYdzVeO1bCUZMie6Z\n km9SJmKu6xue3EeV+pC2XEEw0JugldmY8qmCI7dPZ0D1Avcm0ejNKGaKYsUOZAK3ll7y\n FRgbOlI4Zi0KapZSVaGh4NL8w9PT03hqkHXhHhj42RDjlLsmG6n0j1PXdQxcxSxFsyzd\n iV1sDO5U5XB65kN32fxCKDjhXMA30fRmhXAGFXATyhD+GWVHRGh2posut/0Hptvh7VXl\n qdzZVSfljpan9utwhHsIJAyYNkGB4cjN8QjADQYfwB7h9Ow6mBKe85mfyri4KPENBVeA 9g==",
        "From": "<pbhagavatula@marvell.com>",
        "To": "<jerinj@marvell.com>, Pavan Nikhilesh <pbhagavatula@marvell.com>, \"Shijith\n Thotton\" <sthotton@marvell.com>",
        "CC": "<ndabilpuram@marvell.com>, <dev@dpdk.org>",
        "Date": "Sat, 6 Mar 2021 21:59:41 +0530",
        "Message-ID": "<20210306162942.6845-37-pbhagavatula@marvell.com>",
        "X-Mailer": "git-send-email 2.17.1",
        "In-Reply-To": "<20210306162942.6845-1-pbhagavatula@marvell.com>",
        "References": "<20210306162942.6845-1-pbhagavatula@marvell.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "Content-Type": "text/plain",
        "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10434:6.0.369, 18.0.761\n definitions=2021-03-06_08:2021-03-03,\n 2021-03-06 signatures=0",
        "Subject": "[dpdk-dev] [PATCH 36/36] event/cnxk: add Tx adapter fastpath ops",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Pavan Nikhilesh <pbhagavatula@marvell.com>\n\nAdd support for event eth Tx adapter fastpath operations.\n\nSigned-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>\n---\n drivers/event/cnxk/cn10k_eventdev.c | 35 ++++++++++++\n drivers/event/cnxk/cn10k_worker.c   | 32 +++++++++++\n drivers/event/cnxk/cn10k_worker.h   | 67 ++++++++++++++++++++++\n drivers/event/cnxk/cn9k_eventdev.c  | 76 +++++++++++++++++++++++++\n drivers/event/cnxk/cn9k_worker.c    | 60 ++++++++++++++++++++\n drivers/event/cnxk/cn9k_worker.h    | 87 +++++++++++++++++++++++++++++\n 6 files changed, 357 insertions(+)",
    "diff": "diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c\nindex 3662fd720..817dcc7cc 100644\n--- a/drivers/event/cnxk/cn10k_eventdev.c\n+++ b/drivers/event/cnxk/cn10k_eventdev.c\n@@ -336,6 +336,22 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n #undef R\n \t};\n \n+\t/* Tx modes */\n+\tconst event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,\n+\t\tNIX_TX_FASTPATH_MODES\n+#undef T\n+\t};\n+\n+\tconst event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =\n+\t\t{\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,\n+\t\t\tNIX_TX_FASTPATH_MODES\n+#undef T\n+\t\t};\n+\n \tevent_dev->enqueue = cn10k_sso_hws_enq;\n \tevent_dev->enqueue_burst = cn10k_sso_hws_enq_burst;\n \tevent_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst;\n@@ -395,6 +411,25 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n \t\t\t\t[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];\n \t\t}\n \t}\n+\n+\tif (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {\n+\t\t/* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */\n+\t\tevent_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t} else {\n+\t\tevent_dev->txa_enqueue = sso_hws_tx_adptr_enq\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t[!!(dev->tx_offloads & 
NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t}\n+\n+\tevent_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;\n }\n \n static void\ndiff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c\nindex 46f72cf20..ab149c5e3 100644\n--- a/drivers/event/cnxk/cn10k_worker.c\n+++ b/drivers/event/cnxk/cn10k_worker.c\n@@ -175,3 +175,35 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],\n \n NIX_RX_FASTPATH_MODES\n #undef R\n+\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                                                                      \\\n+\t\tstruct cn10k_sso_hws *ws = port;                               \\\n+\t\tuint64_t cmd[sz];                                              \\\n+                                                                               \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn10k_sso_hws_event_tx(                                 \\\n+\t\t\tws, &ev[0], cmd,                                       \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\tflags);                                                \\\n+\t}                                                                      \\\n+                                                                               \\\n+\tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                                                                      \\\n+\t\tuint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \\\n+\t\tstruct cn10k_sso_hws *ws = port;                               \\\n+  
                                                                             \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn10k_sso_hws_event_tx(                                 \\\n+\t\t\tws, &ev[0], cmd,                                       \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\t(flags) | NIX_TX_MULTI_SEG_F);                         \\\n+\t}\n+\n+NIX_TX_FASTPATH_MODES\n+#undef T\ndiff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h\nindex 9521a5c94..ebfd5dee9 100644\n--- a/drivers/event/cnxk/cn10k_worker.h\n+++ b/drivers/event/cnxk/cn10k_worker.h\n@@ -11,6 +11,7 @@\n \n #include \"cn10k_ethdev.h\"\n #include \"cn10k_rx.h\"\n+#include \"cn10k_tx.h\"\n \n /* SSO Operations */\n \n@@ -239,4 +240,70 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,\n NIX_RX_FASTPATH_MODES\n #undef R\n \n+static __rte_always_inline const struct cn10k_eth_txq *\n+cn10k_sso_hws_xtract_meta(struct rte_mbuf *m,\n+\t\t\t  const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])\n+{\n+\treturn (const struct cn10k_eth_txq *)\n+\t\ttxq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];\n+}\n+\n+static __rte_always_inline uint16_t\n+cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,\n+\t\t       uint64_t *cmd,\n+\t\t       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],\n+\t\t       const uint32_t flags)\n+{\n+\tconst struct cn10k_eth_txq *txq;\n+\tstruct rte_mbuf *m = ev->mbuf;\n+\tuint16_t ref_cnt = m->refcnt;\n+\tuintptr_t lmt_addr;\n+\tuint16_t lmt_id;\n+\tuintptr_t pa;\n+\n+\tlmt_addr = ws->lmt_base;\n+\tROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);\n+\ttxq = cn10k_sso_hws_xtract_meta(m, txq_data);\n+\tcn10k_nix_tx_skeleton(txq, cmd, flags);\n+\t/* Perform header writes before barrier for TSO */\n+\tif (flags & NIX_TX_OFFLOAD_TSO_F)\n+\t\tcn10k_nix_xmit_prepare_tso(m, 
flags);\n+\n+\tcn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags);\n+\tif (flags & NIX_TX_MULTI_SEG_F) {\n+\t\tconst uint16_t segdw =\n+\t\t\tcn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);\n+\t\tpa = txq->io_addr | ((segdw - 1) << 4);\n+\t} else {\n+\t\tpa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;\n+\t}\n+\tif (!ev->sched_type)\n+\t\tcnxk_sso_hws_head_wait(ws->base + SSOW_LF_GWS_TAG);\n+\n+\troc_lmt_submit_steorl(lmt_id, pa);\n+\n+\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {\n+\t\tif (ref_cnt > 1)\n+\t\t\treturn 1;\n+\t}\n+\n+\tcnxk_sso_hws_swtag_flush(ws->base + SSOW_LF_GWS_TAG,\n+\t\t\t\t ws->base + SSOW_LF_GWS_OP_SWTAG_FLUSH);\n+\n+\treturn 1;\n+}\n+\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_##name(             \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_seg_##name(         \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);\n+\n+NIX_TX_FASTPATH_MODES\n+#undef T\n+\n #endif\ndiff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c\nindex 33b3b6237..39e5e516d 100644\n--- a/drivers/event/cnxk/cn9k_eventdev.c\n+++ b/drivers/event/cnxk/cn9k_eventdev.c\n@@ -427,6 +427,38 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n #undef R\n \t\t};\n \n+\t/* Tx modes */\n+\tconst event_tx_adapter_enqueue sso_hws_tx_adptr_enq[2][2][2][2][2] = {\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name,\n+\t\tNIX_TX_FASTPATH_MODES\n+#undef 
T\n+\t};\n+\n+\tconst event_tx_adapter_enqueue sso_hws_tx_adptr_enq_seg[2][2][2][2][2] =\n+\t\t{\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name,\n+\t\t\tNIX_TX_FASTPATH_MODES\n+#undef T\n+\t\t};\n+\n+\tconst event_tx_adapter_enqueue\n+\t\tsso_hws_dual_tx_adptr_enq[2][2][2][2][2] = {\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name,\n+\t\t\tNIX_TX_FASTPATH_MODES\n+#undef T\n+\t\t};\n+\n+\tconst event_tx_adapter_enqueue\n+\t\tsso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2] = {\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\t[f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name,\n+\t\t\tNIX_TX_FASTPATH_MODES\n+#undef T\n+\t\t};\n+\n \tevent_dev->enqueue = cn9k_sso_hws_enq;\n \tevent_dev->enqueue_burst = cn9k_sso_hws_enq_burst;\n \tevent_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;\n@@ -487,6 +519,23 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n \t\t}\n \t}\n \n+\tif (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {\n+\t\t/* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */\n+\t\tevent_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t} else {\n+\t\tevent_dev->txa_enqueue = sso_hws_tx_adptr_enq\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t}\n+\n \tif 
(dev->dual_ws) {\n \t\tevent_dev->enqueue = cn9k_sso_hws_dual_enq;\n \t\tevent_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst;\n@@ -567,8 +616,35 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n \t\t\t\t\t\t    NIX_RX_OFFLOAD_RSS_F)];\n \t\t\t}\n \t\t}\n+\n+\t\tif (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {\n+\t\t\t/* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM]\n+\t\t\t */\n+\t\t\tevent_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg\n+\t\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t\t} else {\n+\t\t\tevent_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq\n+\t\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_VLAN_QINQ_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]\n+\t\t\t\t[!!(dev->tx_offloads &\n+\t\t\t\t    NIX_TX_OFFLOAD_L3_L4_CSUM_F)];\n+\t\t}\n \t}\n \n+\tevent_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;\n \trte_mb();\n }\n \ndiff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c\nindex fb572c7c9..b19078618 100644\n--- a/drivers/event/cnxk/cn9k_worker.c\n+++ b/drivers/event/cnxk/cn9k_worker.c\n@@ -376,3 +376,63 @@ cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],\n \n NIX_RX_FASTPATH_MODES\n #undef R\n+\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\tuint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                                                                   
   \\\n+\t\tstruct cn9k_sso_hws *ws = port;                                \\\n+\t\tuint64_t cmd[sz];                                              \\\n+                                                                               \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn9k_sso_hws_event_tx(                                  \\\n+\t\t\tws->base, &ev[0], cmd,                                 \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\tflags);                                                \\\n+\t}                                                                      \\\n+                                                                               \\\n+\tuint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                                                                      \\\n+\t\tuint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \\\n+\t\tstruct cn9k_sso_hws *ws = port;                                \\\n+                                                                               \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn9k_sso_hws_event_tx(                                  \\\n+\t\t\tws->base, &ev[0], cmd,                                 \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\t(flags) | NIX_TX_MULTI_SEG_F);                         \\\n+\t}                                                                      \\\n+                                                                               \\\n+\tuint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                           
                                           \\\n+\t\tstruct cn9k_sso_hws_dual *ws = port;                           \\\n+\t\tuint64_t cmd[sz];                                              \\\n+                                                                               \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn9k_sso_hws_event_tx(                                  \\\n+\t\t\tws->base[!ws->vws], &ev[0], cmd,                       \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\tflags);                                                \\\n+\t}                                                                      \\\n+                                                                               \\\n+\tuint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n+\t{                                                                      \\\n+\t\tuint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2];           \\\n+\t\tstruct cn9k_sso_hws_dual *ws = port;                           \\\n+                                                                               \\\n+\t\tRTE_SET_USED(nb_events);                                       \\\n+\t\treturn cn9k_sso_hws_event_tx(                                  \\\n+\t\t\tws->base[!ws->vws], &ev[0], cmd,                       \\\n+\t\t\t(const uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) &         \\\n+\t\t\t\tws->tx_adptr_data,                             \\\n+\t\t\t(flags) | NIX_TX_MULTI_SEG_F);                         \\\n+\t}\n+\n+NIX_TX_FASTPATH_MODES\n+#undef T\ndiff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h\nindex bbdca3c95..382910a25 100644\n--- a/drivers/event/cnxk/cn9k_worker.h\n+++ b/drivers/event/cnxk/cn9k_worker.h\n@@ -11,6 +11,7 @@\n \n #include \"cn9k_ethdev.h\"\n #include 
\"cn9k_rx.h\"\n+#include \"cn9k_tx.h\"\n \n /* SSO Operations */\n \n@@ -400,4 +401,90 @@ NIX_RX_FASTPATH_MODES\n NIX_RX_FASTPATH_MODES\n #undef R\n \n+static __rte_always_inline const struct cn9k_eth_txq *\n+cn9k_sso_hws_xtract_meta(struct rte_mbuf *m,\n+\t\t\t const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])\n+{\n+\treturn (const struct cn9k_eth_txq *)\n+\t\ttxq_data[m->port][rte_event_eth_tx_adapter_txq_get(m)];\n+}\n+\n+static __rte_always_inline void\n+cn9k_sso_hws_prepare_pkt(const struct cn9k_eth_txq *txq, struct rte_mbuf *m,\n+\t\t\t uint64_t *cmd, const uint32_t flags)\n+{\n+\troc_lmt_mov(cmd, txq->cmd, cn9k_nix_tx_ext_subs(flags));\n+\tcn9k_nix_xmit_prepare(m, cmd, flags);\n+}\n+\n+static __rte_always_inline uint16_t\n+cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,\n+\t\t      const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],\n+\t\t      const uint32_t flags)\n+{\n+\tstruct rte_mbuf *m = ev->mbuf;\n+\tconst struct cn9k_eth_txq *txq;\n+\tuint16_t ref_cnt = m->refcnt;\n+\n+\t/* Perform header writes before barrier for TSO */\n+\tcn9k_nix_xmit_prepare_tso(m, flags);\n+\t/* Lets commit any changes in the packet here in case when\n+\t * fast free is set as no further changes will be made to mbuf.\n+\t * In case of fast free is not set, both cn9k_nix_prepare_mseg()\n+\t * and cn9k_nix_xmit_prepare() has a barrier after refcnt update.\n+\t */\n+\tif (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))\n+\t\trte_io_wmb();\n+\ttxq = cn9k_sso_hws_xtract_meta(m, txq_data);\n+\tcn9k_sso_hws_prepare_pkt(txq, m, cmd, flags);\n+\n+\tif (flags & NIX_TX_MULTI_SEG_F) {\n+\t\tconst uint16_t segdw = cn9k_nix_prepare_mseg(m, cmd, flags);\n+\t\tif (!ev->sched_type) {\n+\t\t\tcn9k_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);\n+\t\t\tcnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);\n+\t\t\tif (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)\n+\t\t\t\tcn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr,\n+\t\t\t\t\t\t       txq->io_addr, segdw);\n+\t\t} else 
{\n+\t\t\tcn9k_nix_xmit_mseg_one(cmd, txq->lmt_addr, txq->io_addr,\n+\t\t\t\t\t       segdw);\n+\t\t}\n+\t} else {\n+\t\tif (!ev->sched_type) {\n+\t\t\tcn9k_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);\n+\t\t\tcnxk_sso_hws_head_wait(base + SSOW_LF_GWS_TAG);\n+\t\t\tif (cn9k_nix_xmit_submit_lmt(txq->io_addr) == 0)\n+\t\t\t\tcn9k_nix_xmit_one(cmd, txq->lmt_addr,\n+\t\t\t\t\t\t  txq->io_addr, flags);\n+\t\t} else {\n+\t\t\tcn9k_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,\n+\t\t\t\t\t  flags);\n+\t\t}\n+\t}\n+\n+\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {\n+\t\tif (ref_cnt > 1)\n+\t\t\treturn 1;\n+\t}\n+\n+\tcnxk_sso_hws_swtag_flush(base + SSOW_LF_GWS_TAG,\n+\t\t\t\t base + SSOW_LF_GWS_OP_SWTAG_FLUSH);\n+\n+\treturn 1;\n+}\n+\n+#define T(name, f4, f3, f2, f1, f0, sz, flags)                                 \\\n+\tuint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name(                   \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name(               \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name(              \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n+\tuint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name(          \\\n+\t\tvoid *port, struct rte_event ev[], uint16_t nb_events);\n+\n+NIX_TX_FASTPATH_MODES\n+#undef T\n+\n #endif\n",
    "prefixes": [
        "36/36"
    ]
}
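A response like the one above carries far more than most workflows need; it can be reduced to the few fields that matter for triage (id, state, CI check, series, delegate). A sketch, with the `summarize` helper being illustrative rather than part of the API:

```python
def summarize(patch: dict) -> str:
    """Condense a Patchwork patch object into a one-line status summary."""
    series = patch["series"][0]["name"] if patch.get("series") else "-"
    delegate = (patch.get("delegate") or {}).get("username", "-")
    return (f'{patch["id"]} [{patch["state"]}/{patch["check"]}] '
            f'{patch["name"]} (series: {series}, delegate: {delegate})')


# Fields taken from the sample response above.
sample = {
    "id": 88678,
    "state": "superseded",
    "check": "fail",
    "name": "[36/36] event/cnxk: add Tx adapter fastpath ops",
    "series": [{"name": "Marvell CNXK Event device Driver"}],
    "delegate": {"username": "jerin"},
}
print(summarize(sample))
```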