get:
Show a patch.

patch:
Partially update a patch (only the supplied fields are changed).

put:
Update a patch (full replacement of the writable fields).
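For reference, a minimal sketch of querying this endpoint programmatically with Python's requests library. Assumptions not shown on this page: the `requests` package is installed, reads are allowed anonymously (as the GET below suggests), and write access uses token authentication with a placeholder token.

    # Minimal sketch, assuming anonymous read access and token auth for writes.
    import requests

    URL = "https://patches.dpdk.org/api/patches/97746/"

    # Fetch the same patch record shown on this page, as JSON.
    resp = requests.get(URL, params={"format": "json"})
    resp.raise_for_status()
    patch = resp.json()
    print(patch["name"], patch["state"], patch["check"])

    # Updating a patch (PATCH/PUT) requires maintainer credentials; the token
    # header and the "state" field update are assumptions, not taken from this page.
    # requests.patch(URL, json={"state": "accepted"},
    #                headers={"Authorization": "Token <your-token>"})
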

GET /api/patches/97746/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 97746,
    "url": "https://patches.dpdk.org/api/patches/97746/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210902021505.17607-21-ndabilpuram@marvell.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210902021505.17607-21-ndabilpuram@marvell.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210902021505.17607-21-ndabilpuram@marvell.com",
    "date": "2021-09-02T02:14:58",
    "name": "[20/27] net/cnxk: add cn10k Tx support for security offload",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "d7c64a451d956fcd2f7989e3fcf2883ac54f4f1c",
    "submitter": {
        "id": 1202,
        "url": "https://patches.dpdk.org/api/people/1202/?format=api",
        "name": "Nithin Dabilpuram",
        "email": "ndabilpuram@marvell.com"
    },
    "delegate": {
        "id": 310,
        "url": "https://patches.dpdk.org/api/users/310/?format=api",
        "username": "jerin",
        "first_name": "Jerin",
        "last_name": "Jacob",
        "email": "jerinj@marvell.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210902021505.17607-21-ndabilpuram@marvell.com/mbox/",
    "series": [
        {
            "id": 18612,
            "url": "https://patches.dpdk.org/api/series/18612/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=18612",
            "date": "2021-09-02T02:14:38",
            "name": "net/cnxk: support for inline ipsec",
            "version": 1,
            "mbox": "https://patches.dpdk.org/series/18612/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/97746/comments/",
    "check": "warning",
    "checks": "https://patches.dpdk.org/api/patches/97746/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 13A01A0C4D;\n\tThu,  2 Sep 2021 04:19:03 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 2B42941170;\n\tThu,  2 Sep 2021 04:17:49 +0200 (CEST)",
            "from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com\n [67.231.156.173])\n by mails.dpdk.org (Postfix) with ESMTP id E26684118F\n for <dev@dpdk.org>; Thu,  2 Sep 2021 04:17:47 +0200 (CEST)",
            "from pps.filterd (m0045851.ppops.net [127.0.0.1])\n by mx0b-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 181HQCpr011801\n for <dev@dpdk.org>; Wed, 1 Sep 2021 19:17:47 -0700",
            "from dc5-exch02.marvell.com ([199.233.59.182])\n by mx0b-0016f401.pphosted.com with ESMTP id 3atdwq9hue-1\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT)\n for <dev@dpdk.org>; Wed, 01 Sep 2021 19:17:46 -0700",
            "from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.18;\n Wed, 1 Sep 2021 19:17:44 -0700",
            "from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com\n (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.18 via Frontend\n Transport; Wed, 1 Sep 2021 19:17:44 -0700",
            "from hyd1588t430.marvell.com (unknown [10.29.52.204])\n by maili.marvell.com (Postfix) with ESMTP id F200D3F704B;\n Wed,  1 Sep 2021 19:17:41 -0700 (PDT)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-type; s=pfpt0220; bh=tD/uGcBpBxmqiO+M0wobyY3UqvnSv1qCUGZ53ywg8dc=;\n b=AQaxBhiEqh9Wr/HpH/SwTSNiZObDt58HoyiQdrfE8VGgZ4kPzW5uwsvKoOaLqcxraIlB\n 7k0JIFlewvFiIvFpwEEyFQ37IbLrplpV9j3nxmMpwmYJby5zN7ANydg8W+TrhqCl6xIi\n juEr/U4IwOP6OyEBW5C2YqPl4bhZF+LkTJC6taXWDY2nfOVOQO7c6VqO2Ft/KHLTt/Cd\n kpZnfX2z7I/8mHU/vS4qLd17ft+azE8YzbjbobMi1MUp0QS+RQH64JIoQ/1obl+/3gt4\n 8polimz8EZmlGPlKG8J9SOX1OC1vVaxsk59x4lZc5Helw29AfVx6BIGmfLZ82lnkrS5d uw==",
        "From": "Nithin Dabilpuram <ndabilpuram@marvell.com>",
        "To": "Pavan Nikhilesh <pbhagavatula@marvell.com>, Shijith Thotton\n <sthotton@marvell.com>, Nithin Dabilpuram <ndabilpuram@marvell.com>, \"Kiran\n Kumar K\" <kirankumark@marvell.com>, Sunil Kumar Kori <skori@marvell.com>,\n Satha Rao <skoteshwar@marvell.com>",
        "CC": "<jerinj@marvell.com>, <schalla@marvell.com>, <dev@dpdk.org>",
        "Date": "Thu, 2 Sep 2021 07:44:58 +0530",
        "Message-ID": "<20210902021505.17607-21-ndabilpuram@marvell.com>",
        "X-Mailer": "git-send-email 2.8.4",
        "In-Reply-To": "<20210902021505.17607-1-ndabilpuram@marvell.com>",
        "References": "<20210902021505.17607-1-ndabilpuram@marvell.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-Proofpoint-ORIG-GUID": "sNqtDjLX1Rk_PszcTa2JhPPqa_gGWh7G",
        "X-Proofpoint-GUID": "sNqtDjLX1Rk_PszcTa2JhPPqa_gGWh7G",
        "X-Proofpoint-Virus-Version": "vendor=baseguard\n engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475\n definitions=2021-09-01_05,2021-09-01_01,2020-04-07_01",
        "Subject": "[dpdk-dev] [PATCH 20/27] net/cnxk: add cn10k Tx support for\n security offload",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Add support to create and submit CPT instructions on Tx.\n\nSigned-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>\n---\n doc/guides/rel_notes/release_21_11.rst       |   5 +\n drivers/event/cnxk/cn10k_eventdev.c          |  15 +-\n drivers/event/cnxk/cn10k_worker.h            |  74 +-\n drivers/event/cnxk/cn10k_worker_tx_enq.c     |   2 +-\n drivers/event/cnxk/cn10k_worker_tx_enq_seg.c |   2 +-\n drivers/net/cnxk/cn10k_tx.c                  |  31 +-\n drivers/net/cnxk/cn10k_tx.h                  | 981 +++++++++++++++++++++++----\n drivers/net/cnxk/cn10k_tx_mseg.c             |   2 +-\n drivers/net/cnxk/cn10k_tx_vec.c              |   2 +-\n drivers/net/cnxk/cn10k_tx_vec_mseg.c         |   2 +-\n 10 files changed, 934 insertions(+), 182 deletions(-)",
    "diff": "diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst\nindex fb599e5..a87f6cb 100644\n--- a/doc/guides/rel_notes/release_21_11.rst\n+++ b/doc/guides/rel_notes/release_21_11.rst\n@@ -65,6 +65,11 @@ New Features\n \n   * Added event crypto adapter OP_FORWARD mode support.\n \n+* **Added support for Inline IPsec on Marvell CN10K and CN9K.**\n+\n+  * Added support for Inline IPsec in net/cnxk PMD for CN9K event mode\n+    and CN10K poll mode and event mode.\n+\n Removed Items\n -------------\n \ndiff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c\nindex 2f2e7f8..bd1cf55 100644\n--- a/drivers/event/cnxk/cn10k_eventdev.c\n+++ b/drivers/event/cnxk/cn10k_eventdev.c\n@@ -16,7 +16,8 @@\n \t\t\t[!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]\n \n #define CN10K_SET_EVDEV_ENQ_OP(dev, enq_op, enq_ops)                           \\\n-\tenq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \\\n+\tenq_op = enq_ops[!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]     \\\n+\t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]       \\\n \t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]          \\\n \t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]    \\\n \t\t\t[!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]    \\\n@@ -379,17 +380,17 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)\n \n \t/* Tx modes */\n \tconst event_tx_adapter_enqueue\n-\t\tsso_hws_tx_adptr_enq[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n-\t[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,\n+\t\tsso_hws_tx_adptr_enq[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name,\n \t\t\tNIX_TX_FASTPATH_MODES\n #undef T\n \t\t};\n \n \tconst event_tx_adapter_enqueue\n-\t\tsso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                            \\\n-\t[f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,\n+\t\tsso_hws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name,\n \t\t\tNIX_TX_FASTPATH_MODES\n #undef T\n \t\t};\ndiff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h\nindex b79bd90..1255662 100644\n--- a/drivers/event/cnxk/cn10k_worker.h\n+++ b/drivers/event/cnxk/cn10k_worker.h\n@@ -423,7 +423,11 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,\n \t\t    ((queue[0] ^ queue[1]) & (queue[2] ^ queue[3]))) {\n \n \t\t\tfor (j = 0; j < 4; j++) {\n+\t\t\t\tuint8_t lnum = 0, loff = 0, shft = 0;\n \t\t\t\tstruct rte_mbuf *m = mbufs[i + j];\n+\t\t\t\tuintptr_t laddr;\n+\t\t\t\tuint16_t segdw;\n+\t\t\t\tbool sec;\n \n \t\t\t\ttxq = (struct cn10k_eth_txq *)\n \t\t\t\t\ttxq_data[port[j]][queue[j]];\n@@ -434,19 +438,35 @@ cn10k_sso_vwqe_split_tx(struct rte_mbuf **mbufs, uint16_t nb_mbufs,\n \t\t\t\tif (flags & NIX_TX_OFFLOAD_TSO_F)\n \t\t\t\t\tcn10k_nix_xmit_prepare_tso(m, flags);\n \n-\t\t\t\tcn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags,\n-\t\t\t\t\t\t       txq->lso_tun_fmt);\n+\t\t\t\tcn10k_nix_xmit_prepare(m, cmd, flags,\n+\t\t\t\t\t\t       txq->lso_tun_fmt, &sec);\n+\n+\t\t\t\tladdr = lmt_addr;\n+\t\t\t\t/* Prepare CPT instruction and get nixtx addr if\n+\t\t\t\t * it is 
for CPT on same lmtline.\n+\t\t\t\t */\n+\t\t\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\t\t\t\tcn10k_nix_prep_sec(m, cmd, &laddr,\n+\t\t\t\t\t\t\t   lmt_addr, &lnum,\n+\t\t\t\t\t\t\t   &loff, &shft,\n+\t\t\t\t\t\t\t   txq->sa_base, flags);\n+\n+\t\t\t\t/* Move NIX desc to LMT/NIXTX area */\n+\t\t\t\tcn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);\n+\n \t\t\t\tif (flags & NIX_TX_MULTI_SEG_F) {\n-\t\t\t\t\tconst uint16_t segdw =\n-\t\t\t\t\t\tcn10k_nix_prepare_mseg(\n-\t\t\t\t\t\t\tm, (uint64_t *)lmt_addr,\n-\t\t\t\t\t\t\tflags);\n-\t\t\t\t\tpa = txq->io_addr | ((segdw - 1) << 4);\n+\t\t\t\t\tsegdw = cn10k_nix_prepare_mseg(m,\n+\t\t\t\t\t\t(uint64_t *)laddr, flags);\n \t\t\t\t} else {\n-\t\t\t\t\tpa = txq->io_addr |\n-\t\t\t\t\t     (cn10k_nix_tx_ext_subs(flags) + 1)\n-\t\t\t\t\t\t     << 4;\n+\t\t\t\t\tsegdw = cn10k_nix_tx_ext_subs(flags) +\n+\t\t\t\t\t\t2;\n \t\t\t\t}\n+\n+\t\t\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\t\t\t\tpa = txq->cpt_io_addr | 3 << 4;\n+\t\t\t\telse\n+\t\t\t\t\tpa = txq->io_addr | ((segdw - 1) << 4);\n+\n \t\t\t\tif (!sched_type)\n \t\t\t\t\troc_sso_hws_head_wait(base +\n \t\t\t\t\t\t\t      SSOW_LF_GWS_TAG);\n@@ -469,15 +489,19 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,\n \t\t       const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],\n \t\t       const uint32_t flags)\n {\n+\tuint8_t lnum = 0, loff = 0, shft = 0;\n \tstruct cn10k_eth_txq *txq;\n+\tuint16_t ref_cnt, segdw;\n \tstruct rte_mbuf *m;\n \tuintptr_t lmt_addr;\n-\tuint16_t ref_cnt;\n+\tuintptr_t c_laddr;\n \tuint16_t lmt_id;\n \tuintptr_t pa;\n+\tbool sec;\n \n \tlmt_addr = ws->lmt_base;\n \tROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);\n+\tc_laddr = lmt_addr;\n \n \tif (ev->event_type & RTE_EVENT_TYPE_VECTOR) {\n \t\tstruct rte_mbuf **mbufs = ev->vec->mbufs;\n@@ -508,14 +532,28 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,\n \tif (flags & NIX_TX_OFFLOAD_TSO_F)\n \t\tcn10k_nix_xmit_prepare_tso(m, flags);\n \n-\tcn10k_nix_xmit_prepare(m, cmd, lmt_addr, flags, txq->lso_tun_fmt);\n+\tcn10k_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt, &sec);\n+\n+\t/* Prepare CPT instruction and get nixtx addr if\n+\t * it is for CPT on same lmtline.\n+\t */\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\tcn10k_nix_prep_sec(m, cmd, &lmt_addr, c_laddr, &lnum, &loff,\n+\t\t\t\t   &shft, txq->sa_base, flags);\n+\n+\t/* Move NIX desc to LMT/NIXTX area */\n+\tcn10k_nix_xmit_mv_lmt_base(lmt_addr, cmd, flags);\n \tif (flags & NIX_TX_MULTI_SEG_F) {\n-\t\tconst uint16_t segdw =\n-\t\t\tcn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);\n+\t\tsegdw = cn10k_nix_prepare_mseg(m, (uint64_t *)lmt_addr, flags);\n+\t} else {\n+\t\tsegdw = cn10k_nix_tx_ext_subs(flags) + 2;\n+\t}\n+\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\tpa = txq->cpt_io_addr | 3 << 4;\n+\telse\n \t\tpa = txq->io_addr | ((segdw - 1) << 4);\n-\t} else {\n-\t\tpa = txq->io_addr | (cn10k_nix_tx_ext_subs(flags) + 1) << 4;\n-\t}\n+\n \tif (!ev->sched_type)\n \t\troc_sso_hws_head_wait(ws->tx_base + SSOW_LF_GWS_TAG);\n \n@@ -531,7 +569,7 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,\n \treturn 1;\n }\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n \tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \\\n \t\tvoid *port, struct rte_event ev[], uint16_t nb_events);        \\\n \tuint16_t __rte_hot 
cn10k_sso_hws_tx_adptr_enq_seg_##name(              \\\ndiff --git a/drivers/event/cnxk/cn10k_worker_tx_enq.c b/drivers/event/cnxk/cn10k_worker_tx_enq.c\nindex f9968ac..f14c7fc 100644\n--- a/drivers/event/cnxk/cn10k_worker_tx_enq.c\n+++ b/drivers/event/cnxk/cn10k_worker_tx_enq.c\n@@ -4,7 +4,7 @@\n \n #include \"cn10k_worker.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n \tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name(                  \\\n \t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n \t{                                                                      \\\ndiff --git a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c\nindex a24fc42..2ea61e5 100644\n--- a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c\n+++ b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c\n@@ -4,7 +4,7 @@\n \n #include \"cn10k_worker.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n \tuint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name(              \\\n \t\tvoid *port, struct rte_event ev[], uint16_t nb_events)         \\\n \t{                                                                      \\\ndiff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c\nindex 0e1276c..eb962ef 100644\n--- a/drivers/net/cnxk/cn10k_tx.c\n+++ b/drivers/net/cnxk/cn10k_tx.c\n@@ -5,7 +5,7 @@\n #include \"cn10k_ethdev.h\"\n #include \"cn10k_tx.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n \tuint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(\t       \\\n \t\tvoid *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \\\n \t{                                                                      \\\n@@ -24,12 +24,13 @@ NIX_TX_FASTPATH_MODES\n \n static inline void\n pick_tx_func(struct rte_eth_dev *eth_dev,\n-\t     const eth_tx_burst_t tx_burst[2][2][2][2][2][2])\n+\t     const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])\n {\n \tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n \n-\t/* [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */\n+\t/* [SEC] [TSP] [TSO] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */\n \teth_dev->tx_pkt_burst = tx_burst\n+\t\t[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]\n \t\t[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]\n \t\t[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]\n \t\t[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]\n@@ -43,33 +44,33 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)\n {\n \tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n \n-\tconst eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n-\t[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,\n+\tconst eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_##name,\n \n \t\tNIX_TX_FASTPATH_MODES\n #undef T\n \t};\n \n-\tconst eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n-\t[f5][f4][f3][f2][f1][f0] = 
cn10k_nix_xmit_pkts_mseg_##name,\n+\tconst eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_mseg_##name,\n \n \t\tNIX_TX_FASTPATH_MODES\n #undef T\n \t};\n \n-\tconst eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n-\t[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,\n+\tconst eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_##name,\n \n \t\tNIX_TX_FASTPATH_MODES\n #undef T\n \t};\n \n-\tconst eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2] = {\n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n-\t[f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,\n+\tconst eth_tx_burst_t nix_eth_tx_vec_burst_mseg[2][2][2][2][2][2][2] = {\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n+\t[f6][f5][f4][f3][f2][f1][f0] = cn10k_nix_xmit_pkts_vec_mseg_##name,\n \n \t\tNIX_TX_FASTPATH_MODES\n #undef T\ndiff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h\nindex c81a612..70ba929 100644\n--- a/drivers/net/cnxk/cn10k_tx.h\n+++ b/drivers/net/cnxk/cn10k_tx.h\n@@ -6,6 +6,8 @@\n \n #include <rte_vect.h>\n \n+#include <rte_eventdev.h>\n+\n #define NIX_TX_OFFLOAD_NONE\t      (0)\n #define NIX_TX_OFFLOAD_L3_L4_CSUM_F   BIT(0)\n #define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)\n@@ -57,12 +59,22 @@\n static __rte_always_inline int\n cn10k_nix_tx_ext_subs(const uint16_t flags)\n {\n-\treturn (flags & NIX_TX_OFFLOAD_TSTAMP_F)\n-\t\t       ? 2\n-\t\t       : ((flags &\n-\t\t\t   (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F))\n-\t\t\t\t  ? 
1\n-\t\t\t\t  : 0);\n+\treturn (flags & NIX_TX_OFFLOAD_TSTAMP_F) ?\n+\t\t\t     2 :\n+\t\t\t     ((flags &\n+\t\t\t (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?\n+\t\t\t\t      1 :\n+\t\t\t\t      0);\n+}\n+\n+static __rte_always_inline uint8_t\n+cn10k_nix_tx_dwords(const uint16_t flags, const uint8_t segdw)\n+{\n+\tif (!(flags & NIX_TX_MULTI_SEG_F))\n+\t\treturn cn10k_nix_tx_ext_subs(flags) + 2;\n+\n+\t/* Already everything is accounted for in segdw */\n+\treturn segdw;\n }\n \n static __rte_always_inline uint8_t\n@@ -144,6 +156,34 @@ cn10k_nix_tx_steor_vec_data(const uint16_t flags)\n \treturn data;\n }\n \n+static __rte_always_inline uint64_t\n+cn10k_cpt_tx_steor_data(void)\n+{\n+\t/* We have two CPT instructions per LMTLine */\n+\tconst uint64_t dw_m1 = ROC_CN10K_TWO_CPT_INST_DW_M1;\n+\tuint64_t data;\n+\n+\t/* This will be moved to addr area */\n+\tdata = dw_m1 << 16;\n+\tdata |= dw_m1 << 19;\n+\tdata |= dw_m1 << 22;\n+\tdata |= dw_m1 << 25;\n+\tdata |= dw_m1 << 28;\n+\tdata |= dw_m1 << 31;\n+\tdata |= dw_m1 << 34;\n+\tdata |= dw_m1 << 37;\n+\tdata |= dw_m1 << 40;\n+\tdata |= dw_m1 << 43;\n+\tdata |= dw_m1 << 46;\n+\tdata |= dw_m1 << 49;\n+\tdata |= dw_m1 << 52;\n+\tdata |= dw_m1 << 55;\n+\tdata |= dw_m1 << 58;\n+\tdata |= dw_m1 << 61;\n+\n+\treturn data;\n+}\n+\n static __rte_always_inline void\n cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,\n \t\t      const uint16_t flags)\n@@ -165,6 +205,236 @@ cn10k_nix_tx_skeleton(const struct cn10k_eth_txq *txq, uint64_t *cmd,\n }\n \n static __rte_always_inline void\n+cn10k_nix_sec_steorl(uintptr_t io_addr, uint32_t lmt_id, uint8_t lnum,\n+\t\t     uint8_t loff, uint8_t shft)\n+{\n+\tuint64_t data;\n+\tuintptr_t pa;\n+\n+\t/* Check if there is any CPT instruction to submit */\n+\tif (!lnum && !loff)\n+\t\treturn;\n+\n+\tdata = cn10k_cpt_tx_steor_data();\n+\t/* Update lmtline use for partial end line */\n+\tif (loff) {\n+\t\tdata &= ~(0x7ULL << shft);\n+\t\t/* Update it to half full i.e 64B */\n+\t\tdata |= (0x3UL << shft);\n+\t}\n+\n+\tpa = io_addr | ((data >> 16) & 0x7) << 4;\n+\tdata &= ~(0x7ULL << 16);\n+\t/* Update lines - 1 that contain valid data */\n+\tdata |= ((uint64_t)(lnum + loff - 1)) << 12;\n+\tdata |= lmt_id;\n+\n+\t/* STEOR */\n+\troc_lmt_submit_steorl(data, pa);\n+}\n+\n+#if defined(RTE_ARCH_ARM64)\n+static __rte_always_inline void\n+cn10k_nix_prep_sec_vec(struct rte_mbuf *m, uint64x2_t *cmd0, uint64x2_t *cmd1,\n+\t\t       uintptr_t *nixtx_addr, uintptr_t lbase, uint8_t *lnum,\n+\t\t       uint8_t *loff, uint8_t *shft, uint64_t sa_base,\n+\t\t       const uint16_t flags)\n+{\n+\tstruct cn10k_sec_sess_priv sess_priv;\n+\tuint32_t pkt_len, dlen_adj, rlen;\n+\tuint64x2_t cmd01, cmd23;\n+\tuintptr_t dptr, nixtx;\n+\tuint64_t ucode_cmd[4];\n+\tuint64_t *laddr;\n+\tuint8_t l2_len;\n+\tuint16_t tag;\n+\tuint64_t sa;\n+\n+\tsess_priv.u64 = *rte_security_dynfield(m);\n+\n+\tif (flags & NIX_TX_NEED_SEND_HDR_W1)\n+\t\tl2_len = vgetq_lane_u8(*cmd0, 8);\n+\telse\n+\t\tl2_len = m->l2_len;\n+\n+\t/* Retrieve DPTR */\n+\tdptr = vgetq_lane_u64(*cmd1, 1);\n+\tpkt_len = vgetq_lane_u16(*cmd0, 0);\n+\n+\t/* Calculate dlen adj */\n+\tdlen_adj = pkt_len - l2_len;\n+\trlen = (dlen_adj + sess_priv.roundup_len) +\n+\t       (sess_priv.roundup_byte - 1);\n+\trlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);\n+\trlen += sess_priv.partial_len;\n+\tdlen_adj = rlen - dlen_adj;\n+\n+\t/* Update send descriptors. 
Security is single segment only */\n+\t*cmd0 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd0, 0);\n+\t*cmd1 = vsetq_lane_u16(pkt_len + dlen_adj, *cmd1, 0);\n+\n+\t/* Get area where NIX descriptor needs to be stored */\n+\tnixtx = dptr + pkt_len + dlen_adj;\n+\tnixtx += BIT_ULL(7);\n+\tnixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);\n+\n+\t/* Return nixtx addr */\n+\t*nixtx_addr = (nixtx + 16);\n+\n+\t/* DLEN passed is excluding L2HDR */\n+\tpkt_len -= l2_len;\n+\ttag = sa_base & 0xFFFFUL;\n+\tsa_base &= ~0xFFFFUL;\n+\tsa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);\n+\tucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);\n+\tucode_cmd[0] =\n+\t\t(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);\n+\n+\t/* CPT Word 0 and Word 1 */\n+\tcmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));\n+\t/* CPT_RES_S is 16B above NIXTX */\n+\tcmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);\n+\n+\t/* CPT word 2 and 3 */\n+\tcmd23 = vdupq_n_u64(0);\n+\tcmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |\n+\t\t\t\tCNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);\n+\tcmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);\n+\n+\tdptr += l2_len;\n+\tucode_cmd[1] = dptr;\n+\tucode_cmd[2] = dptr;\n+\n+\t/* Move to our line */\n+\tladdr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);\n+\n+\t/* Write CPT instruction to lmt line */\n+\tvst1q_u64(laddr, cmd01);\n+\tvst1q_u64((laddr + 2), cmd23);\n+\n+\t*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;\n+\t*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);\n+\n+\t/* Move to next line for every other CPT inst */\n+\t*loff = !(*loff);\n+\t*lnum = *lnum + (*loff ? 0 : 1);\n+\t*shft = *shft + (*loff ? 0 : 3);\n+}\n+\n+static __rte_always_inline void\n+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,\n+\t\t   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,\n+\t\t   uint64_t sa_base, const uint16_t flags)\n+{\n+\tstruct cn10k_sec_sess_priv sess_priv;\n+\tuint32_t pkt_len, dlen_adj, rlen;\n+\tstruct nix_send_hdr_s *send_hdr;\n+\tuint64x2_t cmd01, cmd23;\n+\tunion nix_send_sg_s *sg;\n+\tuintptr_t dptr, nixtx;\n+\tuint64_t ucode_cmd[4];\n+\tuint64_t *laddr;\n+\tuint8_t l2_len;\n+\tuint16_t tag;\n+\tuint64_t sa;\n+\n+\t/* Move to our line from base */\n+\tsess_priv.u64 = *rte_security_dynfield(m);\n+\tsend_hdr = (struct nix_send_hdr_s *)cmd;\n+\tif (flags & NIX_TX_NEED_EXT_HDR)\n+\t\tsg = (union nix_send_sg_s *)&cmd[4];\n+\telse\n+\t\tsg = (union nix_send_sg_s *)&cmd[2];\n+\n+\tif (flags & NIX_TX_NEED_SEND_HDR_W1)\n+\t\tl2_len = cmd[1] & 0xFF;\n+\telse\n+\t\tl2_len = m->l2_len;\n+\n+\t/* Retrieve DPTR */\n+\tdptr = *(uint64_t *)(sg + 1);\n+\tpkt_len = send_hdr->w0.total;\n+\n+\t/* Calculate dlen adj */\n+\tdlen_adj = pkt_len - l2_len;\n+\trlen = (dlen_adj + sess_priv.roundup_len) +\n+\t       (sess_priv.roundup_byte - 1);\n+\trlen &= ~(uint64_t)(sess_priv.roundup_byte - 1);\n+\trlen += sess_priv.partial_len;\n+\tdlen_adj = rlen - dlen_adj;\n+\n+\t/* Update send descriptors. 
Security is single segment only */\n+\tsend_hdr->w0.total = pkt_len + dlen_adj;\n+\tsg->seg1_size = pkt_len + dlen_adj;\n+\n+\t/* Get area where NIX descriptor needs to be stored */\n+\tnixtx = dptr + pkt_len + dlen_adj;\n+\tnixtx += BIT_ULL(7);\n+\tnixtx = (nixtx - 1) & ~(BIT_ULL(7) - 1);\n+\n+\t/* Return nixtx addr */\n+\t*nixtx_addr = (nixtx + 16);\n+\n+\t/* DLEN passed is excluding L2HDR */\n+\tpkt_len -= l2_len;\n+\ttag = sa_base & 0xFFFFUL;\n+\tsa_base &= ~0xFFFFUL;\n+\tsa = (uintptr_t)roc_nix_inl_ot_ipsec_outb_sa(sa_base, sess_priv.sa_idx);\n+\tucode_cmd[3] = (ROC_CPT_DFLT_ENG_GRP_SE_IE << 61 | 1UL << 60 | sa);\n+\tucode_cmd[0] =\n+\t\t(ROC_IE_OT_MAJOR_OP_PROCESS_OUTBOUND_IPSEC << 48 | pkt_len);\n+\n+\t/* CPT Word 0 and Word 1. Assume no multi-seg support */\n+\tcmd01 = vdupq_n_u64((nixtx + 16) | (cn10k_nix_tx_ext_subs(flags) + 1));\n+\t/* CPT_RES_S is 16B above NIXTX */\n+\tcmd01 = vsetq_lane_u8(nixtx & BIT_ULL(7), cmd01, 8);\n+\n+\t/* CPT word 2 and 3 */\n+\tcmd23 = vdupq_n_u64(0);\n+\tcmd23 = vsetq_lane_u64((((uint64_t)RTE_EVENT_TYPE_CPU << 28) | tag |\n+\t\t\t\tCNXK_ETHDEV_SEC_OUTB_EV_SUB << 20), cmd23, 0);\n+\tcmd23 = vsetq_lane_u64((uintptr_t)m | 1, cmd23, 1);\n+\n+\tdptr += l2_len;\n+\tucode_cmd[1] = dptr;\n+\tucode_cmd[2] = dptr;\n+\n+\t/* Move to our line */\n+\tladdr = LMT_OFF(lbase, *lnum, *loff ? 64 : 0);\n+\n+\t/* Write CPT instruction to lmt line */\n+\tvst1q_u64(laddr, cmd01);\n+\tvst1q_u64((laddr + 2), cmd23);\n+\n+\t*(__uint128_t *)(laddr + 4) = *(__uint128_t *)ucode_cmd;\n+\t*(__uint128_t *)(laddr + 6) = *(__uint128_t *)(ucode_cmd + 2);\n+\n+\t/* Move to next line for every other CPT inst */\n+\t*loff = !(*loff);\n+\t*lnum = *lnum + (*loff ? 0 : 1);\n+\t*shft = *shft + (*loff ? 0 : 3);\n+}\n+\n+#else\n+\n+static __rte_always_inline void\n+cn10k_nix_prep_sec(struct rte_mbuf *m, uint64_t *cmd, uintptr_t *nixtx_addr,\n+\t\t   uintptr_t lbase, uint8_t *lnum, uint8_t *loff, uint8_t *shft,\n+\t\t   uint64_t sa_base, const uint16_t flags)\n+{\n+\tRTE_SET_USED(m);\n+\tRTE_SET_USED(cmd);\n+\tRTE_SET_USED(nixtx_addr);\n+\tRTE_SET_USED(lbase);\n+\tRTE_SET_USED(lnum);\n+\tRTE_SET_USED(loff);\n+\tRTE_SET_USED(shft);\n+\tRTE_SET_USED(sa_base);\n+\tRTE_SET_USED(flags);\n+}\n+#endif\n+\n+static __rte_always_inline void\n cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)\n {\n \tuint64_t mask, ol_flags = m->ol_flags;\n@@ -217,8 +487,8 @@ cn10k_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)\n }\n \n static __rte_always_inline void\n-cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,\n-\t\t       const uint16_t flags, const uint64_t lso_tun_fmt)\n+cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,\n+\t\t       const uint64_t lso_tun_fmt, bool *sec)\n {\n \tstruct nix_send_ext_s *send_hdr_ext;\n \tstruct nix_send_hdr_s *send_hdr;\n@@ -237,16 +507,16 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,\n \t\tsg = (union nix_send_sg_s *)(cmd + 2);\n \t}\n \n-\tif (flags & NIX_TX_NEED_SEND_HDR_W1) {\n+\tif (flags & (NIX_TX_NEED_SEND_HDR_W1 | NIX_TX_OFFLOAD_SECURITY_F)) {\n \t\tol_flags = m->ol_flags;\n \t\tw1.u = 0;\n \t}\n \n-\tif (!(flags & NIX_TX_MULTI_SEG_F)) {\n+\tif (!(flags & NIX_TX_MULTI_SEG_F))\n \t\tsend_hdr->w0.total = m->data_len;\n-\t\tsend_hdr->w0.aura =\n-\t\t\troc_npa_aura_handle_to_aura(m->pool->pool_id);\n-\t}\n+\telse\n+\t\tsend_hdr->w0.total = m->pkt_len;\n+\tsend_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);\n \n \t/*\n \t * L3type:  
2 => IPV4\n@@ -376,7 +646,7 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,\n \t\tsend_hdr->w1.u = w1.u;\n \n \tif (!(flags & NIX_TX_MULTI_SEG_F)) {\n-\t\tsg->seg1_size = m->data_len;\n+\t\tsg->seg1_size = send_hdr->w0.total;\n \t\t*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);\n \n \t\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {\n@@ -389,17 +659,38 @@ cn10k_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, uintptr_t lmt_addr,\n \t\t/* Mark mempool object as \"put\" since it is freed by NIX */\n \t\tif (!send_hdr->w0.df)\n \t\t\t__mempool_check_cookies(m->pool, (void **)&m, 1, 0);\n+\t} else {\n+\t\tsg->seg1_size = m->data_len;\n+\t\t*(rte_iova_t *)(sg + 1) = rte_mbuf_data_iova(m);\n+\n+\t\t/* NOFF is handled later for multi-seg */\n \t}\n \n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F)\n+\t\t*sec = !!(ol_flags & PKT_TX_SEC_OFFLOAD);\n+}\n+\n+static __rte_always_inline void\n+cn10k_nix_xmit_mv_lmt_base(uintptr_t lmt_addr, uint64_t *cmd,\n+\t\t\t   const uint16_t flags)\n+{\n+\tstruct nix_send_ext_s *send_hdr_ext;\n+\tunion nix_send_sg_s *sg;\n+\n \t/* With minimal offloads, 'cmd' being local could be optimized out to\n \t * registers. In other cases, 'cmd' will be in stack. Intent is\n \t * 'cmd' stores content from txq->cmd which is copied only once.\n \t */\n-\t*((struct nix_send_hdr_s *)lmt_addr) = *send_hdr;\n+\t*((struct nix_send_hdr_s *)lmt_addr) = *(struct nix_send_hdr_s *)cmd;\n \tlmt_addr += 16;\n \tif (flags & NIX_TX_NEED_EXT_HDR) {\n+\t\tsend_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);\n \t\t*((struct nix_send_ext_s *)lmt_addr) = *send_hdr_ext;\n \t\tlmt_addr += 16;\n+\n+\t\tsg = (union nix_send_sg_s *)(cmd + 4);\n+\t} else {\n+\t\tsg = (union nix_send_sg_s *)(cmd + 2);\n \t}\n \t/* In case of multi-seg, sg template is stored here */\n \t*((union nix_send_sg_s *)lmt_addr) = *sg;\n@@ -414,7 +705,7 @@ cn10k_nix_xmit_prepare_tstamp(uintptr_t lmt_addr, const uint64_t *cmd,\n \tif (flags & NIX_TX_OFFLOAD_TSTAMP_F) {\n \t\tconst uint8_t is_ol_tstamp = !(ol_flags & PKT_TX_IEEE1588_TMST);\n \t\tstruct nix_send_ext_s *send_hdr_ext =\n-\t\t\t\t\t(struct nix_send_ext_s *)lmt_addr + 16;\n+\t\t\t(struct nix_send_ext_s *)lmt_addr + 16;\n \t\tuint64_t *lmt = (uint64_t *)lmt_addr;\n \t\tuint16_t off = (no_segdw - 1) << 1;\n \t\tstruct nix_send_mem_s *send_mem;\n@@ -457,8 +748,6 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)\n \tuint8_t off, i;\n \n \tsend_hdr = (struct nix_send_hdr_s *)cmd;\n-\tsend_hdr->w0.total = m->pkt_len;\n-\tsend_hdr->w0.aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);\n \n \tif (flags & NIX_TX_NEED_EXT_HDR)\n \t\toff = 2;\n@@ -466,13 +755,27 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)\n \t\toff = 0;\n \n \tsg = (union nix_send_sg_s *)&cmd[2 + off];\n-\t/* Clear sg->u header before use */\n-\tsg->u &= 0xFC00000000000000;\n+\n+\t/* Start from second segment, first segment is already there */\n+\ti = 1;\n \tsg_u = sg->u;\n-\tslist = &cmd[3 + off];\n+\tnb_segs = m->nb_segs - 1;\n+\tm_next = m->next;\n+\tslist = &cmd[3 + off + 1];\n \n-\ti = 0;\n-\tnb_segs = m->nb_segs;\n+\t/* Set invert df if buffer is not to be freed by H/W */\n+\tif (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)\n+\t\tsg_u |= (cnxk_nix_prefree_seg(m) << 55);\n+\n+\t\t/* Mark mempool object as \"put\" since it is freed by NIX */\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\tif (!(sg_u & (1ULL << 55)))\n+\t\t__mempool_check_cookies(m->pool, (void **)&m, 1, 0);\n+\trte_io_wmb();\n+#endif\n+\tm = m_next;\n+\tif 
(!m)\n+\t\tgoto done;\n \n \t/* Fill mbuf segments */\n \tdo {\n@@ -504,6 +807,7 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)\n \t\tm = m_next;\n \t} while (nb_segs);\n \n+done:\n \tsg->u = sg_u;\n \tsg->segs = i;\n \tsegdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];\n@@ -522,10 +826,17 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,\n {\n \tstruct cn10k_eth_txq *txq = tx_queue;\n \tconst rte_iova_t io_addr = txq->io_addr;\n-\tuintptr_t pa, lmt_addr = txq->lmt_base;\n+\tuint8_t lnum, c_lnum, c_shft, c_loff;\n+\tuintptr_t pa, lbase = txq->lmt_base;\n \tuint16_t lmt_id, burst, left, i;\n+\tuintptr_t c_lbase = lbase;\n+\trte_iova_t c_io_addr;\n \tuint64_t lso_tun_fmt;\n+\tuint16_t c_lmt_id;\n+\tuint64_t sa_base;\n+\tuintptr_t laddr;\n \tuint64_t data;\n+\tbool sec;\n \n \tif (!(flags & NIX_TX_VWQE_F)) {\n \t\tNIX_XMIT_FC_OR_RETURN(txq, pkts);\n@@ -540,10 +851,24 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,\n \t\tlso_tun_fmt = txq->lso_tun_fmt;\n \n \t/* Get LMT base address and LMT ID as lcore id */\n-\tROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);\n+\tROC_LMT_BASE_ID_GET(lbase, lmt_id);\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);\n+\t\tc_io_addr = txq->cpt_io_addr;\n+\t\tsa_base = txq->sa_base;\n+\t}\n+\n \tleft = pkts;\n again:\n \tburst = left > 32 ? 32 : left;\n+\n+\tlnum = 0;\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tc_lnum = 0;\n+\t\tc_loff = 0;\n+\t\tc_shft = 16;\n+\t}\n+\n \tfor (i = 0; i < burst; i++) {\n \t\t/* Perform header writes for TSO, barrier at\n \t\t * lmt steorl will suffice.\n@@ -551,16 +876,39 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,\n \t\tif (flags & NIX_TX_OFFLOAD_TSO_F)\n \t\t\tcn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);\n \n-\t\tcn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,\n-\t\t\t\t       lso_tun_fmt);\n-\t\tcn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],\n+\t\tcn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,\n+\t\t\t\t       &sec);\n+\n+\t\tladdr = (uintptr_t)LMT_OFF(lbase, lnum, 0);\n+\n+\t\t/* Prepare CPT instruction and get nixtx addr */\n+\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\t\tcn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,\n+\t\t\t\t\t   &c_lnum, &c_loff, &c_shft, sa_base,\n+\t\t\t\t\t   flags);\n+\n+\t\t/* Move NIX desc to LMT/NIXTX area */\n+\t\tcn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);\n+\t\tcn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],\n \t\t\t\t\t      tx_pkts[i]->ol_flags, 4, flags);\n-\t\tlmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);\n+\t\tif (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec)\n+\t\t\tlnum++;\n \t}\n \n \tif (flags & NIX_TX_VWQE_F)\n \t\troc_sso_hws_head_wait(base);\n \n+\tleft -= burst;\n+\ttx_pkts += burst;\n+\n+\t/* Submit CPT instructions if any */\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\t/* Reduce pkts to be sent to CPT */\n+\t\tburst -= ((c_lnum << 1) + c_loff);\n+\t\tcn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,\n+\t\t\t\t     c_shft);\n+\t}\n+\n \t/* Trigger LMTST */\n \tif (burst > 16) {\n \t\tdata = cn10k_nix_tx_steor_data(flags);\n@@ -591,16 +939,9 @@ cn10k_nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts,\n \t\troc_lmt_submit_steorl(data, pa);\n \t}\n \n-\tleft -= burst;\n \trte_io_wmb();\n-\tif (left) {\n-\t\t/* Start processing another burst */\n-\t\ttx_pkts += burst;\n-\t\t/* Reset lmt base addr */\n-\t\tlmt_addr -= 
(1ULL << ROC_LMT_LINE_SIZE_LOG2);\n-\t\tlmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));\n+\tif (left)\n \t\tgoto again;\n-\t}\n \n \treturn pkts;\n }\n@@ -611,13 +952,20 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\t const uint16_t flags)\n {\n \tstruct cn10k_eth_txq *txq = tx_queue;\n-\tuintptr_t pa0, pa1, lmt_addr = txq->lmt_base;\n+\tuintptr_t pa0, pa1, lbase = txq->lmt_base;\n \tconst rte_iova_t io_addr = txq->io_addr;\n \tuint16_t segdw, lmt_id, burst, left, i;\n+\tuint8_t lnum, c_lnum, c_loff;\n+\tuintptr_t c_lbase = lbase;\n \tuint64_t data0, data1;\n+\trte_iova_t c_io_addr;\n \tuint64_t lso_tun_fmt;\n+\tuint8_t shft, c_shft;\n \t__uint128_t data128;\n-\tuint16_t shft;\n+\tuint16_t c_lmt_id;\n+\tuint64_t sa_base;\n+\tuintptr_t laddr;\n+\tbool sec;\n \n \tNIX_XMIT_FC_OR_RETURN(txq, pkts);\n \n@@ -630,12 +978,26 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\tlso_tun_fmt = txq->lso_tun_fmt;\n \n \t/* Get LMT base address and LMT ID as lcore id */\n-\tROC_LMT_BASE_ID_GET(lmt_addr, lmt_id);\n+\tROC_LMT_BASE_ID_GET(lbase, lmt_id);\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tROC_LMT_CPT_BASE_ID_GET(c_lbase, c_lmt_id);\n+\t\tc_io_addr = txq->cpt_io_addr;\n+\t\tsa_base = txq->sa_base;\n+\t}\n+\n \tleft = pkts;\n again:\n \tburst = left > 32 ? 32 : left;\n \tshft = 16;\n \tdata128 = 0;\n+\n+\tlnum = 0;\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tc_lnum = 0;\n+\t\tc_loff = 0;\n+\t\tc_shft = 16;\n+\t}\n+\n \tfor (i = 0; i < burst; i++) {\n \t\t/* Perform header writes for TSO, barrier at\n \t\t * lmt steorl will suffice.\n@@ -643,22 +1005,47 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\tif (flags & NIX_TX_OFFLOAD_TSO_F)\n \t\t\tcn10k_nix_xmit_prepare_tso(tx_pkts[i], flags);\n \n-\t\tcn10k_nix_xmit_prepare(tx_pkts[i], cmd, lmt_addr, flags,\n-\t\t\t\t       lso_tun_fmt);\n+\t\tcn10k_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt,\n+\t\t\t\t       &sec);\n+\n+\t\tladdr = (uintptr_t)LMT_OFF(lbase, lnum, 0);\n+\n+\t\t/* Prepare CPT instruction and get nixtx addr */\n+\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && sec)\n+\t\t\tcn10k_nix_prep_sec(tx_pkts[i], cmd, &laddr, c_lbase,\n+\t\t\t\t\t   &c_lnum, &c_loff, &c_shft, sa_base,\n+\t\t\t\t\t   flags);\n+\n+\t\t/* Move NIX desc to LMT/NIXTX area */\n+\t\tcn10k_nix_xmit_mv_lmt_base(laddr, cmd, flags);\n+\n \t\t/* Store sg list directly on lmt line */\n-\t\tsegdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)lmt_addr,\n+\t\tsegdw = cn10k_nix_prepare_mseg(tx_pkts[i], (uint64_t *)laddr,\n \t\t\t\t\t       flags);\n-\t\tcn10k_nix_xmit_prepare_tstamp(lmt_addr, &txq->cmd[0],\n+\t\tcn10k_nix_xmit_prepare_tstamp(laddr, &txq->cmd[0],\n \t\t\t\t\t      tx_pkts[i]->ol_flags, segdw,\n \t\t\t\t\t      flags);\n-\t\tlmt_addr += (1ULL << ROC_LMT_LINE_SIZE_LOG2);\n-\t\tdata128 |= (((__uint128_t)(segdw - 1)) << shft);\n-\t\tshft += 3;\n+\t\tif (!(flags & NIX_TX_OFFLOAD_SECURITY_F) || !sec) {\n+\t\t\tlnum++;\n+\t\t\tdata128 |= (((__uint128_t)(segdw - 1)) << shft);\n+\t\t\tshft += 3;\n+\t\t}\n \t}\n \n \tif (flags & NIX_TX_VWQE_F)\n \t\troc_sso_hws_head_wait(base);\n \n+\tleft -= burst;\n+\ttx_pkts += burst;\n+\n+\t/* Submit CPT instructions if any */\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\t/* Reduce pkts to be sent to CPT */\n+\t\tburst -= ((c_lnum << 1) + c_loff);\n+\t\tcn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,\n+\t\t\t\t     c_shft);\n+\t}\n+\n \tdata0 = (uint64_t)data128;\n \tdata1 = (uint64_t)(data128 >> 64);\n \t/* 
Make data0 similar to data1 */\n@@ -695,16 +1082,9 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\troc_lmt_submit_steorl(data0, pa0);\n \t}\n \n-\tleft -= burst;\n \trte_io_wmb();\n-\tif (left) {\n-\t\t/* Start processing another burst */\n-\t\ttx_pkts += burst;\n-\t\t/* Reset lmt base addr */\n-\t\tlmt_addr -= (1ULL << ROC_LMT_LINE_SIZE_LOG2);\n-\t\tlmt_addr &= (~(BIT_ULL(ROC_LMT_BASE_PER_CORE_LOG2) - 1));\n+\tif (left)\n \t\tgoto again;\n-\t}\n \n \treturn pkts;\n }\n@@ -989,6 +1369,90 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,\n \treturn lmt_used;\n }\n \n+static __rte_always_inline void\n+cn10k_nix_lmt_next(uint8_t dw, uintptr_t laddr, uint8_t *lnum, uint8_t *loff,\n+\t\t   uint8_t *shift, __uint128_t *data128, uintptr_t *next)\n+{\n+\t/* Go to next line if we are out of space */\n+\tif ((*loff + (dw << 4)) > 128) {\n+\t\t*data128 = *data128 |\n+\t\t\t   (((__uint128_t)((*loff >> 4) - 1)) << *shift);\n+\t\t*shift = *shift + 3;\n+\t\t*loff = 0;\n+\t\t*lnum = *lnum + 1;\n+\t}\n+\n+\t*next = (uintptr_t)LMT_OFF(laddr, *lnum, *loff);\n+\t*loff = *loff + (dw << 4);\n+}\n+\n+static __rte_always_inline void\n+cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,\n+\t\t     uint64x2_t cmd0, uint64x2_t cmd1, uint64x2_t cmd2,\n+\t\t     uint64x2_t cmd3, const uint16_t flags)\n+{\n+\tuint8_t off;\n+\n+\t/* Handle no fast free when security is enabled without mseg */\n+\tif ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&\n+\t    (flags & NIX_TX_OFFLOAD_SECURITY_F) &&\n+\t    !(flags & NIX_TX_MULTI_SEG_F)) {\n+\t\tunion nix_send_sg_s sg;\n+\n+\t\tsg.u = vgetq_lane_u64(cmd1, 0);\n+\t\tsg.u |= (cnxk_nix_prefree_seg(mbuf) << 55);\n+\t\tcmd1 = vsetq_lane_u64(sg.u, cmd1, 0);\n+\n+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG\n+\t\tsg.u = vgetq_lane_u64(cmd1, 0);\n+\t\tif (!(sg.u & (1ULL << 55)))\n+\t\t\t__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1,\n+\t\t\t\t\t\t0);\n+\t\trte_io_wmb();\n+#endif\n+\t}\n+\tif (flags & NIX_TX_MULTI_SEG_F) {\n+\t\tif ((flags & NIX_TX_NEED_EXT_HDR) &&\n+\t\t    (flags & NIX_TX_OFFLOAD_TSTAMP_F)) {\n+\t\t\tcn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),\n+\t\t\t\t\t\t   &cmd0, &cmd1, segdw, flags);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);\n+\t\t\toff = segdw - 4;\n+\t\t\toff <<= 4;\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 48 + off), cmd3);\n+\t\t} else if (flags & NIX_TX_NEED_EXT_HDR) {\n+\t\t\tcn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 48),\n+\t\t\t\t\t\t   &cmd0, &cmd1, segdw, flags);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);\n+\t\t} else {\n+\t\t\tcn10k_nix_prepare_mseg_vec(mbuf, LMT_OFF(laddr, 0, 32),\n+\t\t\t\t\t\t   &cmd0, &cmd1, segdw, flags);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);\n+\t\t}\n+\t} else if (flags & NIX_TX_NEED_EXT_HDR) {\n+\t\t/* Store the prepared send desc to LMT lines */\n+\t\tif (flags & NIX_TX_OFFLOAD_TSTAMP_F) {\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 48), cmd3);\n+\t\t} else {\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd2);\n+\t\t\tvst1q_u64(LMT_OFF(laddr, 0, 32), cmd1);\n+\t\t}\n+\t} else {\n+\t\t/* Store the prepared send desc 
to LMT lines */\n+\t\tvst1q_u64(LMT_OFF(laddr, 0, 0), cmd0);\n+\t\tvst1q_u64(LMT_OFF(laddr, 0, 16), cmd1);\n+\t}\n+}\n+\n static __rte_always_inline uint16_t\n cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\t   uint16_t pkts, uint64_t *cmd, uintptr_t base,\n@@ -998,10 +1462,10 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \tuint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;\n \tuint64x2_t cmd0[NIX_DESCS_PER_LOOP], cmd1[NIX_DESCS_PER_LOOP],\n \t\tcmd2[NIX_DESCS_PER_LOOP], cmd3[NIX_DESCS_PER_LOOP];\n+\tuint16_t left, scalar, burst, i, lmt_id, c_lmt_id;\n \tuint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3, pa;\n \tuint64x2_t senddesc01_w0, senddesc23_w0;\n \tuint64x2_t senddesc01_w1, senddesc23_w1;\n-\tuint16_t left, scalar, burst, i, lmt_id;\n \tuint64x2_t sendext01_w0, sendext23_w0;\n \tuint64x2_t sendext01_w1, sendext23_w1;\n \tuint64x2_t sendmem01_w0, sendmem23_w0;\n@@ -1010,12 +1474,16 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \tuint64x2_t sgdesc01_w0, sgdesc23_w0;\n \tuint64x2_t sgdesc01_w1, sgdesc23_w1;\n \tstruct cn10k_eth_txq *txq = tx_queue;\n-\tuintptr_t laddr = txq->lmt_base;\n \trte_iova_t io_addr = txq->io_addr;\n+\tuintptr_t laddr = txq->lmt_base;\n+\tuint8_t c_lnum, c_shft, c_loff;\n \tuint64x2_t ltypes01, ltypes23;\n \tuint64x2_t xtmp128, ytmp128;\n \tuint64x2_t xmask01, xmask23;\n-\tuint8_t lnum, shift;\n+\tuintptr_t c_laddr = laddr;\n+\tuint8_t lnum, shift, loff;\n+\trte_iova_t c_io_addr;\n+\tuint64_t sa_base;\n \tunion wdata {\n \t\t__uint128_t data128;\n \t\tuint64_t data[2];\n@@ -1061,19 +1529,36 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \n \t/* Get LMT base address and LMT ID as lcore id */\n \tROC_LMT_BASE_ID_GET(laddr, lmt_id);\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tROC_LMT_CPT_BASE_ID_GET(c_laddr, c_lmt_id);\n+\t\tc_io_addr = txq->cpt_io_addr;\n+\t\tsa_base = txq->sa_base;\n+\t}\n+\n \tleft = pkts;\n again:\n \t/* Number of packets to prepare depends on offloads enabled. 
*/\n \tburst = left > cn10k_nix_pkts_per_vec_brst(flags) ?\n \t\t\t      cn10k_nix_pkts_per_vec_brst(flags) :\n \t\t\t      left;\n-\tif (flags & NIX_TX_MULTI_SEG_F) {\n+\tif (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)) {\n \t\twd.data128 = 0;\n \t\tshift = 16;\n \t}\n \tlnum = 0;\n+\tif (NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tloff = 0;\n+\t\tc_loff = 0;\n+\t\tc_lnum = 0;\n+\t\tc_shft = 16;\n+\t}\n \n \tfor (i = 0; i < burst; i += NIX_DESCS_PER_LOOP) {\n+\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F && c_lnum + 2 > 16) {\n+\t\t\tburst = i;\n+\t\t\tbreak;\n+\t\t}\n+\n \t\tif (flags & NIX_TX_MULTI_SEG_F) {\n \t\t\tuint8_t j;\n \n@@ -1833,7 +2318,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t}\n \n \t\tif ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&\n-\t\t    !(flags & NIX_TX_MULTI_SEG_F)) {\n+\t\t    !(flags & NIX_TX_MULTI_SEG_F) &&\n+\t\t    !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {\n \t\t\t/* Set don't free bit if reference count > 1 */\n \t\t\txmask01 = vdupq_n_u64(0);\n \t\t\txmask23 = xmask01;\n@@ -1873,7 +2359,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\t\t\t(void **)&mbuf3, 1, 0);\n \t\t\tsenddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);\n \t\t\tsenddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);\n-\t\t} else if (!(flags & NIX_TX_MULTI_SEG_F)) {\n+\t\t} else if (!(flags & NIX_TX_MULTI_SEG_F) &&\n+\t\t\t   !(flags & NIX_TX_OFFLOAD_SECURITY_F)) {\n \t\t\t/* Move mbufs to iova */\n \t\t\tmbuf0 = (uint64_t *)tx_pkts[0];\n \t\t\tmbuf1 = (uint64_t *)tx_pkts[1];\n@@ -1918,7 +2405,84 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t\tcmd2[3] = vzip2q_u64(sendext23_w0, sendext23_w1);\n \t\t}\n \n-\t\tif (flags & NIX_TX_MULTI_SEG_F) {\n+\t\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\t\tconst uint64x2_t olf = {PKT_TX_SEC_OFFLOAD,\n+\t\t\t\t\t\tPKT_TX_SEC_OFFLOAD};\n+\t\t\tuintptr_t next;\n+\t\t\tuint8_t dw;\n+\n+\t\t\t/* Extract ol_flags. 
*/\n+\t\t\txtmp128 = vzip1q_u64(len_olflags0, len_olflags1);\n+\t\t\tytmp128 = vzip1q_u64(len_olflags2, len_olflags3);\n+\n+\t\t\txtmp128 = vtstq_u64(olf, xtmp128);\n+\t\t\tytmp128 = vtstq_u64(olf, ytmp128);\n+\n+\t\t\t/* Process mbuf0 */\n+\t\t\tdw = cn10k_nix_tx_dwords(flags, segdw[0]);\n+\t\t\tif (vgetq_lane_u64(xtmp128, 0))\n+\t\t\t\tcn10k_nix_prep_sec_vec(tx_pkts[0], &cmd0[0],\n+\t\t\t\t\t\t       &cmd1[0], &next, c_laddr,\n+\t\t\t\t\t\t       &c_lnum, &c_loff,\n+\t\t\t\t\t\t       &c_shft, sa_base, flags);\n+\t\t\telse\n+\t\t\t\tcn10k_nix_lmt_next(dw, laddr, &lnum, &loff,\n+\t\t\t\t\t\t   &shift, &wd.data128, &next);\n+\n+\t\t\t/* Store mbuf0 to LMTLINE/CPT NIXTX area */\n+\t\t\tcn10k_nix_xmit_store(tx_pkts[0], segdw[0], next,\n+\t\t\t\t\t     cmd0[0], cmd1[0], cmd2[0], cmd3[0],\n+\t\t\t\t\t     flags);\n+\n+\t\t\t/* Process mbuf1 */\n+\t\t\tdw = cn10k_nix_tx_dwords(flags, segdw[1]);\n+\t\t\tif (vgetq_lane_u64(xtmp128, 1))\n+\t\t\t\tcn10k_nix_prep_sec_vec(tx_pkts[1], &cmd0[1],\n+\t\t\t\t\t\t       &cmd1[1], &next, c_laddr,\n+\t\t\t\t\t\t       &c_lnum, &c_loff,\n+\t\t\t\t\t\t       &c_shft, sa_base, flags);\n+\t\t\telse\n+\t\t\t\tcn10k_nix_lmt_next(dw, laddr, &lnum, &loff,\n+\t\t\t\t\t\t   &shift, &wd.data128, &next);\n+\n+\t\t\t/* Store mbuf1 to LMTLINE/CPT NIXTX area */\n+\t\t\tcn10k_nix_xmit_store(tx_pkts[1], segdw[1], next,\n+\t\t\t\t\t     cmd0[1], cmd1[1], cmd2[1], cmd3[1],\n+\t\t\t\t\t     flags);\n+\n+\t\t\t/* Process mbuf2 */\n+\t\t\tdw = cn10k_nix_tx_dwords(flags, segdw[2]);\n+\t\t\tif (vgetq_lane_u64(ytmp128, 0))\n+\t\t\t\tcn10k_nix_prep_sec_vec(tx_pkts[2], &cmd0[2],\n+\t\t\t\t\t\t       &cmd1[2], &next, c_laddr,\n+\t\t\t\t\t\t       &c_lnum, &c_loff,\n+\t\t\t\t\t\t       &c_shft, sa_base, flags);\n+\t\t\telse\n+\t\t\t\tcn10k_nix_lmt_next(dw, laddr, &lnum, &loff,\n+\t\t\t\t\t\t   &shift, &wd.data128, &next);\n+\n+\t\t\t/* Store mbuf2 to LMTLINE/CPT NIXTX area */\n+\t\t\tcn10k_nix_xmit_store(tx_pkts[2], segdw[2], next,\n+\t\t\t\t\t     cmd0[2], cmd1[2], cmd2[2], cmd3[2],\n+\t\t\t\t\t     flags);\n+\n+\t\t\t/* Process mbuf3 */\n+\t\t\tdw = cn10k_nix_tx_dwords(flags, segdw[3]);\n+\t\t\tif (vgetq_lane_u64(ytmp128, 1))\n+\t\t\t\tcn10k_nix_prep_sec_vec(tx_pkts[3], &cmd0[3],\n+\t\t\t\t\t\t       &cmd1[3], &next, c_laddr,\n+\t\t\t\t\t\t       &c_lnum, &c_loff,\n+\t\t\t\t\t\t       &c_shft, sa_base, flags);\n+\t\t\telse\n+\t\t\t\tcn10k_nix_lmt_next(dw, laddr, &lnum, &loff,\n+\t\t\t\t\t\t   &shift, &wd.data128, &next);\n+\n+\t\t\t/* Store mbuf3 to LMTLINE/CPT NIXTX area */\n+\t\t\tcn10k_nix_xmit_store(tx_pkts[3], segdw[3], next,\n+\t\t\t\t\t     cmd0[3], cmd1[3], cmd2[3], cmd3[3],\n+\t\t\t\t\t     flags);\n+\n+\t\t} else if (flags & NIX_TX_MULTI_SEG_F) {\n \t\t\tuint8_t j;\n \n \t\t\tsegdw[4] = 8;\n@@ -1982,21 +2546,35 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\ttx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;\n \t}\n \n-\tif (flags & NIX_TX_MULTI_SEG_F)\n+\t/* Roundup lnum to last line if it is partial */\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F) {\n+\t\tlnum = lnum + !!loff;\n+\t\twd.data128 = wd.data128 |\n+\t\t\t(((__uint128_t)(((loff >> 4) - 1) & 0x7) << shift));\n+\t}\n+\n+\tif (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))\n \t\twd.data[0] >>= 16;\n \n \tif (flags & NIX_TX_VWQE_F)\n \t\troc_sso_hws_head_wait(base);\n \n+\tleft -= burst;\n+\n+\t/* Submit CPT instructions if any */\n+\tif (flags & NIX_TX_OFFLOAD_SECURITY_F)\n+\t\tcn10k_nix_sec_steorl(c_io_addr, c_lmt_id, c_lnum, c_loff,\n+\t\t\t\t     c_shft);\n+\n \t/* Trigger LMTST */\n 
\tif (lnum > 16) {\n-\t\tif (!(flags & NIX_TX_MULTI_SEG_F))\n+\t\tif (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))\n \t\t\twd.data[0] = cn10k_nix_tx_steor_vec_data(flags);\n \n \t\tpa = io_addr | (wd.data[0] & 0x7) << 4;\n \t\twd.data[0] &= ~0x7ULL;\n \n-\t\tif (flags & NIX_TX_MULTI_SEG_F)\n+\t\tif (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))\n \t\t\twd.data[0] <<= 16;\n \n \t\twd.data[0] |= (15ULL << 12);\n@@ -2005,13 +2583,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t/* STEOR0 */\n \t\troc_lmt_submit_steorl(wd.data[0], pa);\n \n-\t\tif (!(flags & NIX_TX_MULTI_SEG_F))\n+\t\tif (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))\n \t\t\twd.data[1] = cn10k_nix_tx_steor_vec_data(flags);\n \n \t\tpa = io_addr | (wd.data[1] & 0x7) << 4;\n \t\twd.data[1] &= ~0x7ULL;\n \n-\t\tif (flags & NIX_TX_MULTI_SEG_F)\n+\t\tif (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))\n \t\t\twd.data[1] <<= 16;\n \n \t\twd.data[1] |= ((uint64_t)(lnum - 17)) << 12;\n@@ -2020,13 +2598,13 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\t/* STEOR1 */\n \t\troc_lmt_submit_steorl(wd.data[1], pa);\n \t} else if (lnum) {\n-\t\tif (!(flags & NIX_TX_MULTI_SEG_F))\n+\t\tif (!(flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F)))\n \t\t\twd.data[0] = cn10k_nix_tx_steor_vec_data(flags);\n \n \t\tpa = io_addr | (wd.data[0] & 0x7) << 4;\n \t\twd.data[0] &= ~0x7ULL;\n \n-\t\tif (flags & NIX_TX_MULTI_SEG_F)\n+\t\tif (flags & (NIX_TX_MULTI_SEG_F | NIX_TX_OFFLOAD_SECURITY_F))\n \t\t\twd.data[0] <<= 16;\n \n \t\twd.data[0] |= ((uint64_t)(lnum - 1)) << 12;\n@@ -2036,7 +2614,6 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n \t\troc_lmt_submit_steorl(wd.data[0], pa);\n \t}\n \n-\tleft -= burst;\n \trte_io_wmb();\n \tif (left)\n \t\tgoto again;\n@@ -2076,139 +2653,269 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,\n #define NOFF_F\t     NIX_TX_OFFLOAD_MBUF_NOFF_F\n #define TSO_F\t     NIX_TX_OFFLOAD_TSO_F\n #define TSP_F\t     NIX_TX_OFFLOAD_TSTAMP_F\n+#define T_SEC_F      NIX_TX_OFFLOAD_SECURITY_F\n \n-/* [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */\n+/* [T_SEC_F] [TSP] [TSO] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */\n #define NIX_TX_FASTPATH_MODES\t\t\t\t\t\t\\\n-T(no_offload,\t\t\t\t0, 0, 0, 0, 0, 0,\t4,\t\\\n+T(no_offload,\t\t\t\t0, 0, 0, 0, 0, 0, 0,\t4,\t\\\n \t\tNIX_TX_OFFLOAD_NONE)\t\t\t\t\t\\\n-T(l3l4csum,\t\t\t\t0, 0, 0, 0, 0, 1,\t4,\t\\\n+T(l3l4csum,\t\t\t\t0, 0, 0, 0, 0, 0, 1,\t4,\t\\\n \t\tL3L4CSUM_F)\t\t\t\t\t\t\\\n-T(ol3ol4csum,\t\t\t\t0, 0, 0, 0, 1, 0,\t4,\t\\\n+T(ol3ol4csum,\t\t\t\t0, 0, 0, 0, 0, 1, 0,\t4,\t\\\n \t\tOL3OL4CSUM_F)\t\t\t\t\t\t\\\n-T(ol3ol4csum_l3l4csum,\t\t\t0, 0, 0, 0, 1, 1,\t4,\t\\\n+T(ol3ol4csum_l3l4csum,\t\t\t0, 0, 0, 0, 0, 1, 1,\t4,\t\\\n \t\tOL3OL4CSUM_F | L3L4CSUM_F)\t\t\t\t\\\n-T(vlan,\t\t\t\t\t0, 0, 0, 1, 0, 0,\t6,\t\\\n+T(vlan,\t\t\t\t\t0, 0, 0, 0, 1, 0, 0,\t6,\t\\\n \t\tVLAN_F)\t\t\t\t\t\t\t\\\n-T(vlan_l3l4csum,\t\t\t0, 0, 0, 1, 0, 1,\t6,\t\\\n+T(vlan_l3l4csum,\t\t\t0, 0, 0, 0, 1, 0, 1,\t6,\t\\\n \t\tVLAN_F | L3L4CSUM_F)\t\t\t\t\t\\\n-T(vlan_ol3ol4csum,\t\t\t0, 0, 0, 1, 1, 0,\t6,\t\\\n+T(vlan_ol3ol4csum,\t\t\t0, 0, 0, 0, 1, 1, 0,\t6,\t\\\n \t\tVLAN_F | OL3OL4CSUM_F)\t\t\t\t\t\\\n-T(vlan_ol3ol4csum_l3l4csum,\t\t0, 0, 0, 1, 1, 1,\t6,\t\\\n+T(vlan_ol3ol4csum_l3l4csum,\t\t0, 0, 0, 0, 1, 1, 1,\t6,\t\\\n \t\tVLAN_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\t\\\n-T(noff,\t\t\t\t\t0, 0, 1, 0, 0, 0,\t4,\t\\\n+T(noff,\t\t\t\t\t0, 0, 0, 
1, 0, 0, 0,\t4,\t\\\n \t\tNOFF_F)\t\t\t\t\t\t\t\\\n-T(noff_l3l4csum,\t\t\t0, 0, 1, 0, 0, 1,\t4,\t\\\n+T(noff_l3l4csum,\t\t\t0, 0, 0, 1, 0, 0, 1,\t4,\t\\\n \t\tNOFF_F | L3L4CSUM_F)\t\t\t\t\t\\\n-T(noff_ol3ol4csum,\t\t\t0, 0, 1, 0, 1, 0,\t4,\t\\\n+T(noff_ol3ol4csum,\t\t\t0, 0, 0, 1, 0, 1, 0,\t4,\t\\\n \t\tNOFF_F | OL3OL4CSUM_F)\t\t\t\t\t\\\n-T(noff_ol3ol4csum_l3l4csum,\t\t0, 0, 1, 0, 1, 1,\t4,\t\\\n+T(noff_ol3ol4csum_l3l4csum,\t\t0, 0, 0, 1, 0, 1, 1,\t4,\t\\\n \t\tNOFF_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\t\\\n-T(noff_vlan,\t\t\t\t0, 0, 1, 1, 0, 0,\t6,\t\\\n+T(noff_vlan,\t\t\t\t0, 0, 0, 1, 1, 0, 0,\t6,\t\\\n \t\tNOFF_F | VLAN_F)\t\t\t\t\t\\\n-T(noff_vlan_l3l4csum,\t\t\t0, 0, 1, 1, 0, 1,\t6,\t\\\n+T(noff_vlan_l3l4csum,\t\t\t0, 0, 0, 1, 1, 0, 1,\t6,\t\\\n \t\tNOFF_F | VLAN_F | L3L4CSUM_F)\t\t\t\t\\\n-T(noff_vlan_ol3ol4csum,\t\t\t0, 0, 1, 1, 1, 0,\t6,\t\\\n+T(noff_vlan_ol3ol4csum,\t\t\t0, 0, 0, 1, 1, 1, 0,\t6,\t\\\n \t\tNOFF_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(noff_vlan_ol3ol4csum_l3l4csum,\t0, 0, 1, 1, 1, 1,\t6,\t\\\n+T(noff_vlan_ol3ol4csum_l3l4csum,\t0, 0, 0, 1, 1, 1, 1,\t6,\t\\\n \t\tNOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\\\n-T(tso,\t\t\t\t\t0, 1, 0, 0, 0, 0,\t6,\t\\\n+T(tso,\t\t\t\t\t0, 0, 1, 0, 0, 0, 0,\t6,\t\\\n \t\tTSO_F)\t\t\t\t\t\t\t\\\n-T(tso_l3l4csum,\t\t\t\t0, 1, 0, 0, 0, 1,\t6,\t\\\n+T(tso_l3l4csum,\t\t\t\t0, 0, 1, 0, 0, 0, 1,\t6,\t\\\n \t\tTSO_F | L3L4CSUM_F)\t\t\t\t\t\\\n-T(tso_ol3ol4csum,\t\t\t0, 1, 0, 0, 1, 0,\t6,\t\\\n+T(tso_ol3ol4csum,\t\t\t0, 0, 1, 0, 0, 1, 0,\t6,\t\\\n \t\tTSO_F | OL3OL4CSUM_F)\t\t\t\t\t\\\n-T(tso_ol3ol4csum_l3l4csum,\t\t0, 1, 0, 0, 1, 1,\t6,\t\\\n+T(tso_ol3ol4csum_l3l4csum,\t\t0, 0, 1, 0, 0, 1, 1,\t6,\t\\\n \t\tTSO_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\t\\\n-T(tso_vlan,\t\t\t\t0, 1, 0, 1, 0, 0,\t6,\t\\\n+T(tso_vlan,\t\t\t\t0, 0, 1, 0, 1, 0, 0,\t6,\t\\\n \t\tTSO_F | VLAN_F)\t\t\t\t\t\t\\\n-T(tso_vlan_l3l4csum,\t\t\t0, 1, 0, 1, 0, 1,\t6,\t\\\n+T(tso_vlan_l3l4csum,\t\t\t0, 0, 1, 0, 1, 0, 1,\t6,\t\\\n \t\tTSO_F | VLAN_F | L3L4CSUM_F)\t\t\t\t\\\n-T(tso_vlan_ol3ol4csum,\t\t\t0, 1, 0, 1, 1, 0,\t6,\t\\\n+T(tso_vlan_ol3ol4csum,\t\t\t0, 0, 1, 0, 1, 1, 0,\t6,\t\\\n \t\tTSO_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(tso_vlan_ol3ol4csum_l3l4csum,\t\t0, 1, 0, 1, 1, 1,\t6,\t\\\n+T(tso_vlan_ol3ol4csum_l3l4csum,\t\t0, 0, 1, 0, 1, 1, 1,\t6,\t\\\n \t\tTSO_F | VLAN_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\\\n-T(tso_noff,\t\t\t\t0, 1, 1, 0, 0, 0,\t6,\t\\\n+T(tso_noff,\t\t\t\t0, 0, 1, 1, 0, 0, 0,\t6,\t\\\n \t\tTSO_F | NOFF_F)\t\t\t\t\t\t\\\n-T(tso_noff_l3l4csum,\t\t\t0, 1, 1, 0, 0, 1,\t6,\t\\\n+T(tso_noff_l3l4csum,\t\t\t0, 0, 1, 1, 0, 0, 1,\t6,\t\\\n \t\tTSO_F | NOFF_F | L3L4CSUM_F)\t\t\t\t\\\n-T(tso_noff_ol3ol4csum,\t\t\t0, 1, 1, 0, 1, 0,\t6,\t\\\n+T(tso_noff_ol3ol4csum,\t\t\t0, 0, 1, 1, 0, 1, 0,\t6,\t\\\n \t\tTSO_F | NOFF_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(tso_noff_ol3ol4csum_l3l4csum,\t\t0, 1, 1, 0, 1, 1,\t6,\t\\\n+T(tso_noff_ol3ol4csum_l3l4csum,\t\t0, 0, 1, 1, 0, 1, 1,\t6,\t\\\n \t\tTSO_F | NOFF_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\\\n-T(tso_noff_vlan,\t\t\t0, 1, 1, 1, 0, 0,\t6,\t\\\n+T(tso_noff_vlan,\t\t\t0, 0, 1, 1, 1, 0, 0,\t6,\t\\\n \t\tTSO_F | NOFF_F | VLAN_F)\t\t\t\t\\\n-T(tso_noff_vlan_l3l4csum,\t\t0, 1, 1, 1, 0, 1,\t6,\t\\\n+T(tso_noff_vlan_l3l4csum,\t\t0, 0, 1, 1, 1, 0, 1,\t6,\t\\\n \t\tTSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\t\\\n-T(tso_noff_vlan_ol3ol4csum,\t\t0, 1, 1, 1, 1, 0,\t6,\t\\\n+T(tso_noff_vlan_ol3ol4csum,\t\t0, 0, 1, 1, 1, 1, 0,\t6,\t\\\n \t\tTSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\\\n-T(tso_noff_vlan_ol3ol4csum_l3l4csum,\t0, 1, 1, 1, 1, 
1,\t6,\t\\\n+T(tso_noff_vlan_ol3ol4csum_l3l4csum,\t0, 0, 1, 1, 1, 1, 1,\t6,\t\\\n \t\tTSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n-T(ts,\t\t\t\t\t1, 0, 0, 0, 0, 0,\t8,\t\\\n+T(ts,\t\t\t\t\t0, 1, 0, 0, 0, 0, 0,\t8,\t\\\n \t\tTSP_F)\t\t\t\t\t\t\t\\\n-T(ts_l3l4csum,\t\t\t\t1, 0, 0, 0, 0, 1,\t8,\t\\\n+T(ts_l3l4csum,\t\t\t\t0, 1, 0, 0, 0, 0, 1,\t8,\t\\\n \t\tTSP_F | L3L4CSUM_F)\t\t\t\t\t\\\n-T(ts_ol3ol4csum,\t\t\t1, 0, 0, 0, 1, 0,\t8,\t\\\n+T(ts_ol3ol4csum,\t\t\t0, 1, 0, 0, 0, 1, 0,\t8,\t\\\n \t\tTSP_F | OL3OL4CSUM_F)\t\t\t\t\t\\\n-T(ts_ol3ol4csum_l3l4csum,\t\t1, 0, 0, 0, 1, 1,\t8,\t\\\n+T(ts_ol3ol4csum_l3l4csum,\t\t0, 1, 0, 0, 0, 1, 1,\t8,\t\\\n \t\tTSP_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\t\\\n-T(ts_vlan,\t\t\t\t1, 0, 0, 1, 0, 0,\t8,\t\\\n+T(ts_vlan,\t\t\t\t0, 1, 0, 0, 1, 0, 0,\t8,\t\\\n \t\tTSP_F | VLAN_F)\t\t\t\t\t\t\\\n-T(ts_vlan_l3l4csum,\t\t\t1, 0, 0, 1, 0, 1,\t8,\t\\\n+T(ts_vlan_l3l4csum,\t\t\t0, 1, 0, 0, 1, 0, 1,\t8,\t\\\n \t\tTSP_F | VLAN_F | L3L4CSUM_F)\t\t\t\t\\\n-T(ts_vlan_ol3ol4csum,\t\t\t1, 0, 0, 1, 1, 0,\t8,\t\\\n+T(ts_vlan_ol3ol4csum,\t\t\t0, 1, 0, 0, 1, 1, 0,\t8,\t\\\n \t\tTSP_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(ts_vlan_ol3ol4csum_l3l4csum,\t\t1, 0, 0, 1, 1, 1,\t8,\t\\\n+T(ts_vlan_ol3ol4csum_l3l4csum,\t\t0, 1, 0, 0, 1, 1, 1,\t8,\t\\\n \t\tTSP_F | VLAN_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\\\n-T(ts_noff,\t\t\t\t1, 0, 1, 0, 0, 0,\t8,\t\\\n+T(ts_noff,\t\t\t\t0, 1, 0, 1, 0, 0, 0,\t8,\t\\\n \t\tTSP_F | NOFF_F)\t\t\t\t\t\t\\\n-T(ts_noff_l3l4csum,\t\t\t1, 0, 1, 0, 0, 1,\t8,\t\\\n+T(ts_noff_l3l4csum,\t\t\t0, 1, 0, 1, 0, 0, 1,\t8,\t\\\n \t\tTSP_F | NOFF_F | L3L4CSUM_F)\t\t\t\t\\\n-T(ts_noff_ol3ol4csum,\t\t\t1, 0, 1, 0, 1, 0,\t8,\t\\\n+T(ts_noff_ol3ol4csum,\t\t\t0, 1, 0, 1, 0, 1, 0,\t8,\t\\\n \t\tTSP_F | NOFF_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(ts_noff_ol3ol4csum_l3l4csum,\t\t1, 0, 1, 0, 1, 1,\t8,\t\\\n+T(ts_noff_ol3ol4csum_l3l4csum,\t\t0, 1, 0, 1, 0, 1, 1,\t8,\t\\\n \t\tTSP_F | NOFF_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\t\\\n-T(ts_noff_vlan,\t\t\t\t1, 0, 1, 1, 0, 0,\t8,\t\\\n+T(ts_noff_vlan,\t\t\t\t0, 1, 0, 1, 1, 0, 0,\t8,\t\\\n \t\tTSP_F | NOFF_F | VLAN_F)\t\t\t\t\\\n-T(ts_noff_vlan_l3l4csum,\t\t1, 0, 1, 1, 0, 1,\t8,\t\\\n+T(ts_noff_vlan_l3l4csum,\t\t0, 1, 0, 1, 1, 0, 1,\t8,\t\\\n \t\tTSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\t\\\n-T(ts_noff_vlan_ol3ol4csum,\t\t1, 0, 1, 1, 1, 0,\t8,\t\\\n+T(ts_noff_vlan_ol3ol4csum,\t\t0, 1, 0, 1, 1, 1, 0,\t8,\t\\\n \t\tTSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\\\n-T(ts_noff_vlan_ol3ol4csum_l3l4csum,\t1, 0, 1, 1, 1, 1,\t8,\t\\\n+T(ts_noff_vlan_ol3ol4csum_l3l4csum,\t0, 1, 0, 1, 1, 1, 1,\t8,\t\\\n \t\tTSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n-T(ts_tso,\t\t\t\t1, 1, 0, 0, 0, 0,\t8,\t\\\n+T(ts_tso,\t\t\t\t0, 1, 1, 0, 0, 0, 0,\t8,\t\\\n \t\tTSP_F | TSO_F)\t\t\t\t\t\t\\\n-T(ts_tso_l3l4csum,\t\t\t1, 1, 0, 0, 0, 1,\t8,\t\\\n+T(ts_tso_l3l4csum,\t\t\t0, 1, 1, 0, 0, 0, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | L3L4CSUM_F)\t\t\t\t\\\n-T(ts_tso_ol3ol4csum,\t\t\t1, 1, 0, 0, 1, 0,\t8,\t\\\n+T(ts_tso_ol3ol4csum,\t\t\t0, 1, 1, 0, 0, 1, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | OL3OL4CSUM_F)\t\t\t\t\\\n-T(ts_tso_ol3ol4csum_l3l4csum,\t\t1, 1, 0, 0, 1, 1,\t8,\t\\\n+T(ts_tso_ol3ol4csum_l3l4csum,\t\t0, 1, 1, 0, 0, 1, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\\\n-T(ts_tso_vlan,\t\t\t\t1, 1, 0, 1, 0, 0,\t8,\t\\\n+T(ts_tso_vlan,\t\t\t\t0, 1, 1, 0, 1, 0, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | VLAN_F)\t\t\t\t\t\\\n-T(ts_tso_vlan_l3l4csum,\t\t\t1, 1, 0, 1, 0, 1,\t8,\t\\\n+T(ts_tso_vlan_l3l4csum,\t\t\t0, 1, 1, 0, 1, 0, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | 
VLAN_F | L3L4CSUM_F)\t\t\t\\\n-T(ts_tso_vlan_ol3ol4csum,\t\t1, 1, 0, 1, 1, 0,\t8,\t\\\n+T(ts_tso_vlan_ol3ol4csum,\t\t0, 1, 1, 0, 1, 1, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\\\n-T(ts_tso_vlan_ol3ol4csum_l3l4csum,\t1, 1, 0, 1, 1, 1,\t8,\t\\\n+T(ts_tso_vlan_ol3ol4csum_l3l4csum,\t0, 1, 1, 0, 1, 1, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\\\n-T(ts_tso_noff,\t\t\t\t1, 1, 1, 0, 0, 0,\t8,\t\\\n+T(ts_tso_noff,\t\t\t\t0, 1, 1, 1, 0, 0, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F)\t\t\t\t\t\\\n-T(ts_tso_noff_l3l4csum,\t\t\t1, 1, 1, 0, 0, 1,\t8,\t\\\n+T(ts_tso_noff_l3l4csum,\t\t\t0, 1, 1, 1, 0, 0, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | L3L4CSUM_F)\t\t\t\\\n-T(ts_tso_noff_ol3ol4csum,\t\t1, 1, 1, 0, 1, 0,\t8,\t\\\n+T(ts_tso_noff_ol3ol4csum,\t\t0, 1, 1, 1, 0, 1, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)\t\t\t\\\n-T(ts_tso_noff_ol3ol4csum_l3l4csum,\t1, 1, 1, 0, 1, 1,\t8,\t\\\n+T(ts_tso_noff_ol3ol4csum_l3l4csum,\t0, 1, 1, 1, 0, 1, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\\\n-T(ts_tso_noff_vlan,\t\t\t1, 1, 1, 1, 0, 0,\t8,\t\\\n+T(ts_tso_noff_vlan,\t\t\t0, 1, 1, 1, 1, 0, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | VLAN_F)\t\t\t\\\n-T(ts_tso_noff_vlan_l3l4csum,\t\t1, 1, 1, 1, 0, 1,\t8,\t\\\n+T(ts_tso_noff_vlan_l3l4csum,\t\t0, 1, 1, 1, 1, 0, 1,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\\\n-T(ts_tso_noff_vlan_ol3ol4csum,\t\t1, 1, 1, 1, 1, 0,\t8,\t\\\n+T(ts_tso_noff_vlan_ol3ol4csum,\t\t0, 1, 1, 1, 1, 1, 0,\t8,\t\\\n \t\tTSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\t\\\n-T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,\t1, 1, 1, 1, 1, 1,\t8,\t\\\n-\t\tTSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\n+T(ts_tso_noff_vlan_ol3ol4csum_l3l4csum,\t0, 1, 1, 1, 1, 1, 1,\t8,\t\\\n+\t\tTSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\\\n+T(sec,\t\t\t\t\t1, 0, 0, 0, 0, 0, 0,\t4,\t\\\n+\t\tT_SEC_F)\t\t\t\t\t\t\\\n+T(sec_l3l4csum,\t\t\t\t1, 0, 0, 0, 0, 0, 1,\t4,\t\\\n+\t\tT_SEC_F | L3L4CSUM_F)\t\t\t\t\t\\\n+T(sec_ol3ol4csum,\t\t\t1, 0, 0, 0, 0, 1, 0,\t4,\t\\\n+\t\tT_SEC_F | OL3OL4CSUM_F)\t\t\t\t\t\\\n+T(sec_ol3ol4csum_l3l4csum,\t\t1, 0, 0, 0, 0, 1, 1,\t4,\t\\\n+\t\tT_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_vlan,\t\t\t\t1, 0, 0, 0, 1, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | VLAN_F)\t\t\t\t\t\\\n+T(sec_vlan_l3l4csum,\t\t\t1, 0, 0, 0, 1, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | VLAN_F | L3L4CSUM_F)\t\t\t\t\\\n+T(sec_vlan_ol3ol4csum,\t\t\t1, 0, 0, 0, 1, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | VLAN_F | OL3OL4CSUM_F)\t\t\t\\\n+T(sec_vlan_ol3ol4csum_l3l4csum,\t\t1, 0, 0, 0, 1, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | VLAN_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\\\n+T(sec_noff,\t\t\t\t1, 0, 0, 1, 0, 0, 0,\t4,\t\\\n+\t\tT_SEC_F | NOFF_F)\t\t\t\t\t\\\n+T(sec_noff_l3l4csum,\t\t\t1, 0, 0, 1, 0, 0, 1,\t4,\t\\\n+\t\tT_SEC_F | NOFF_F | L3L4CSUM_F)\t\t\t\t\\\n+T(sec_noff_ol3ol4csum,\t\t\t1, 0, 0, 1, 0, 1, 0,\t4,\t\\\n+\t\tT_SEC_F | NOFF_F | OL3OL4CSUM_F)\t\t\t\\\n+T(sec_noff_ol3ol4csum_l3l4csum,\t\t1, 0, 0, 1, 0, 1, 1,\t4,\t\\\n+\t\tT_SEC_F | NOFF_F | OL3OL4CSUM_F |\tL3L4CSUM_F)\t\\\n+T(sec_noff_vlan,\t\t\t1, 0, 0, 1, 1, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | NOFF_F | VLAN_F)\t\t\t\t\\\n+T(sec_noff_vlan_l3l4csum,\t\t1, 0, 0, 1, 1, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_noff_vlan_ol3ol4csum,\t\t1, 0, 0, 1, 1, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\t\\\n+T(sec_noff_vlan_ol3ol4csum_l3l4csum,\t1, 0, 0, 1, 1, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | 
L3L4CSUM_F)\t\\\n+T(sec_tso,\t\t\t\t1, 0, 1, 0, 0, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F)\t\t\t\t\t\\\n+T(sec_tso_l3l4csum,\t\t\t1, 0, 1, 0, 0, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | L3L4CSUM_F)\t\t\t\t\\\n+T(sec_tso_ol3ol4csum,\t\t\t1, 0, 1, 0, 0, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | OL3OL4CSUM_F)\t\t\t\t\\\n+T(sec_tso_ol3ol4csum_l3l4csum,\t\t1, 0, 1, 0, 0, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\\\n+T(sec_tso_vlan,\t\t\t\t1, 0, 1, 0, 1, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | VLAN_F)\t\t\t\t\\\n+T(sec_tso_vlan_l3l4csum,\t\t1, 0, 1, 0, 1, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_tso_vlan_ol3ol4csum,\t\t1, 0, 1, 0, 1, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F)\t\t\\\n+T(sec_tso_vlan_ol3ol4csum_l3l4csum,\t1, 0, 1, 0, 1, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n+T(sec_tso_noff,\t\t\t\t1, 0, 1, 1, 0, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F)\t\t\t\t\\\n+T(sec_tso_noff_l3l4csum,\t\t1, 0, 1, 1, 0, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_tso_noff_ol3ol4csum,\t\t1, 0, 1, 1, 0, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F)\t\t\\\n+T(sec_tso_noff_ol3ol4csum_l3l4csum,\t1, 0, 1, 1, 0, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n+T(sec_tso_noff_vlan,\t\t\t1, 0, 1, 1, 1, 0, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | VLAN_F)\t\t\t\\\n+T(sec_tso_noff_vlan_l3l4csum,\t\t1, 0, 1, 1, 1, 0, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\\\n+T(sec_tso_noff_vlan_ol3ol4csum,\t\t1, 0, 1, 1, 1, 1, 0,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\\\n+T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1,\t6,\t\\\n+\t\tT_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\\\n+T(sec_ts,\t\t\t\t1, 1, 0, 0, 0, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F)\t\t\t\t\t\\\n+T(sec_ts_l3l4csum,\t\t\t1, 1, 0, 0, 0, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | L3L4CSUM_F)\t\t\t\t\\\n+T(sec_ts_ol3ol4csum,\t\t\t1, 1, 0, 0, 0, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | OL3OL4CSUM_F)\t\t\t\t\\\n+T(sec_ts_ol3ol4csum_l3l4csum,\t\t1, 1, 0, 0, 0, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\t\\\n+T(sec_ts_vlan,\t\t\t\t1, 1, 0, 0, 1, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | VLAN_F)\t\t\t\t\\\n+T(sec_ts_vlan_l3l4csum,\t\t\t1, 1, 0, 0, 1, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_ts_vlan_ol3ol4csum,\t\t1, 1, 0, 0, 1, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F)\t\t\\\n+T(sec_ts_vlan_ol3ol4csum_l3l4csum,\t1, 1, 0, 0, 1, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n+T(sec_ts_noff,\t\t\t\t1, 1, 0, 1, 0, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F)\t\t\t\t\\\n+T(sec_ts_noff_l3l4csum,\t\t\t1, 1, 0, 1, 0, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_ts_noff_ol3ol4csum,\t\t1, 1, 0, 1, 0, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F)\t\t\\\n+T(sec_ts_noff_ol3ol4csum_l3l4csum,\t1, 1, 0, 1, 0, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n+T(sec_ts_noff_vlan,\t\t\t1, 1, 0, 1, 1, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | VLAN_F)\t\t\t\\\n+T(sec_ts_noff_vlan_l3l4csum,\t\t1, 1, 0, 1, 1, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\t\\\n+T(sec_ts_noff_vlan_ol3ol4csum,\t\t1, 1, 0, 1, 1, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\t\\\n+T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum,\t1, 1, 
0, 1, 1, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\\\n+T(sec_ts_tso,\t\t\t\t1, 1, 1, 0, 0, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F)\t\t\t\t\\\n+T(sec_ts_tso_l3l4csum,\t\t\t1, 1, 1, 0, 0, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | L3L4CSUM_F)\t\t\t\\\n+T(sec_ts_tso_ol3ol4csum,\t\t1, 1, 1, 0, 0, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F)\t\t\t\\\n+T(sec_ts_tso_ol3ol4csum_l3l4csum,\t1, 1, 1, 0, 0, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F)\t\\\n+T(sec_ts_tso_vlan,\t\t\t1, 1, 1, 0, 1, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | VLAN_F)\t\t\t\\\n+T(sec_ts_tso_vlan_l3l4csum,\t\t1, 1, 1, 0, 1, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | VLAN_F | L3L4CSUM_F)\t\t\\\n+T(sec_ts_tso_vlan_ol3ol4csum,\t\t1, 1, 1, 0, 1, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F)\t\\\n+T(sec_ts_tso_vlan_ol3ol4csum_l3l4csum,\t1, 1, 1, 0, 1, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \\\n+T(sec_ts_tso_noff,\t\t\t1, 1, 1, 1, 0, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F)\t\t\t\\\n+T(sec_ts_tso_noff_l3l4csum,\t\t1, 1, 1, 1, 0, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | L3L4CSUM_F)\t\t\\\n+T(sec_ts_tso_noff_ol3ol4csum,\t\t1, 1, 1, 1, 0, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F)\t\\\n+T(sec_ts_tso_noff_ol3ol4csum_l3l4csum,\t1, 1, 1, 1, 0, 1, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\\\n+T(sec_ts_tso_noff_vlan,\t\t\t1, 1, 1, 1, 1, 0, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F)\t\t\\\n+T(sec_ts_tso_noff_vlan_l3l4csum,\t1, 1, 1, 1, 1, 0, 1,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F)\t\\\n+T(sec_ts_tso_noff_vlan_ol3ol4csum,\t1, 1, 1, 1, 1, 1, 0,\t8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\\\n+T(sec_ts_tso_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 1, 1, 8,\t\\\n+\t\tT_SEC_F | TSP_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \\\n+\t\tL3L4CSUM_F)\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n \tuint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_##name(          \\\n \t\tvoid *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts);     \\\n \t\t\t\t\t\t\t\t\t       \\\ndiff --git a/drivers/net/cnxk/cn10k_tx_mseg.c b/drivers/net/cnxk/cn10k_tx_mseg.c\nindex 4ea4c8a..2b83409 100644\n--- a/drivers/net/cnxk/cn10k_tx_mseg.c\n+++ b/drivers/net/cnxk/cn10k_tx_mseg.c\n@@ -5,7 +5,7 @@\n #include \"cn10k_ethdev.h\"\n #include \"cn10k_tx.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n \tuint16_t __rte_noinline __rte_hot\t\t\t\t       \\\n \t\tcn10k_nix_xmit_pkts_mseg_##name(void *tx_queue,                \\\n \t\t\t\t\t\tstruct rte_mbuf **tx_pkts,     \\\ndiff --git a/drivers/net/cnxk/cn10k_tx_vec.c b/drivers/net/cnxk/cn10k_tx_vec.c\nindex a035049..2789b13 100644\n--- a/drivers/net/cnxk/cn10k_tx_vec.c\n+++ b/drivers/net/cnxk/cn10k_tx_vec.c\n@@ -5,7 +5,7 @@\n #include \"cn10k_ethdev.h\"\n #include \"cn10k_tx.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)\t\t\t       \\\n \tuint16_t __rte_noinline __rte_hot\t\t\t\t       \\\n \t\tcn10k_nix_xmit_pkts_vec_##name(void *tx_queue,                 \\\n \t\t\t\t\t       struct rte_mbuf **tx_pkts,      \\\ndiff 
--git a/drivers/net/cnxk/cn10k_tx_vec_mseg.c b/drivers/net/cnxk/cn10k_tx_vec_mseg.c\nindex 7f98f79..98000df 100644\n--- a/drivers/net/cnxk/cn10k_tx_vec_mseg.c\n+++ b/drivers/net/cnxk/cn10k_tx_vec_mseg.c\n@@ -5,7 +5,7 @@\n #include \"cn10k_ethdev.h\"\n #include \"cn10k_tx.h\"\n \n-#define T(name, f5, f4, f3, f2, f1, f0, sz, flags)                             \\\n+#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags)                         \\\n \tuint16_t __rte_noinline __rte_hot cn10k_nix_xmit_pkts_vec_mseg_##name( \\\n \t\tvoid *tx_queue, struct rte_mbuf **tx_pkts, uint16_t pkts)      \\\n \t{                                                                      \\\n",
    "prefixes": [
        "20/27"
    ]
}
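
The bulk of the diff above widens the NIX_TX_FASTPATH_MODES X-macro table by one leading T_SEC_F column (0 for every existing entry, 1 for the new sec_* entries) and grows the T() macro in the Tx source files from six to seven flag arguments, so each specialized transmit function also gains a security-offload variant. Below is a minimal, self-contained C sketch of that X-macro expansion pattern only; the flag values, the packet arguments, the function bodies and the printf are illustrative assumptions for demonstration and are not the cnxk driver's real code.

/*
 * Minimal sketch of the X-macro pattern the hunk above extends.
 * Flag values, the packet type and the function bodies are
 * illustrative placeholders, not the cnxk driver's implementation.
 */
#include <stdint.h>
#include <stdio.h>

#define L3L4CSUM_F (1u << 0)
#define VLAN_F     (1u << 2)
#define T_SEC_F    (1u << 6) /* stand-in for the new security-offload column */

/* [SEC] [VLAN] [L3L4CSUM] -> one entry per enabled-flag combination */
#define FASTPATH_MODES                                                 \
	T(no_offload, 0, 0, 0, 0)                                      \
	T(l3l4csum,   0, 0, 1, L3L4CSUM_F)                             \
	T(sec,        1, 0, 0, T_SEC_F)                                \
	T(sec_vlan,   1, 1, 0, T_SEC_F | VLAN_F)

/*
 * Each T() expands to one specialized transmit routine; because
 * `flags` is a compile-time constant, the offload branches can be
 * resolved by the compiler. The f2/f1/f0 bit columns are unused in
 * this sketch; in the driver the equivalent columns feed lookup
 * logic elsewhere.
 */
#define T(name, f2, f1, f0, flags)                                     \
	uint16_t xmit_pkts_##name(void **pkts, uint16_t nb_pkts)       \
	{                                                              \
		(void)pkts;                                            \
		if ((flags) & T_SEC_F)                                 \
			printf(#name ": would take the security path\n"); \
		return nb_pkts;                                        \
	}

FASTPATH_MODES
#undef T

int main(void)
{
	void *pkts[1] = { NULL };

	/* Adding a column to the table grows the generated function set. */
	return (xmit_pkts_no_offload(pkts, 1) +
		xmit_pkts_sec_vlan(pkts, 1)) == 2 ? 0 : 1;
}

Generating one function per flag combination trades binary size for branch-free fast paths, which is why introducing a single new flag roughly doubles the number of table entries, visible in the long run of added sec_* rows in the hunk.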