get:
Show a patch.

patch:
Update a patch (partial update: only the supplied fields are changed).

put:
Update a patch (full update: all writable fields must be supplied).
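The patch detail endpoint shown below can be exercised with any HTTP client. A minimal sketch of building the request URL and condensing the JSON payload into a one-line summary (the helper names and the `summarize` field selection are illustrative, not part of the Patchwork API; fetching is left to the caller's HTTP library):

```python
API_BASE = "https://patches.dpdk.org/api"  # base URL, as seen in the payload below


def patch_url(patch_id: int) -> str:
    """Build the REST endpoint URL for a single patch."""
    return f"{API_BASE}/patches/{patch_id}/"


def summarize(patch: dict) -> str:
    """Condense a patch payload into 'id: name [state]' form."""
    return "{}: {} [{}]".format(patch["id"], patch["name"], patch["state"])


# Sample fields taken from the response reproduced below.
sample = {
    "id": 97741,
    "name": "[15/27] net/cnxk: add inline security support for cn9k",
    "state": "superseded",
}

print(patch_url(97741))
print(summarize(sample))
```

An actual fetch would be, e.g., `json.load(urllib.request.urlopen(patch_url(97741)))`; the `state`, `delegate`, and `check` fields are the ones most useful for tracking a patch through review.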

GET /api/patches/97741/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 97741,
    "url": "https://patches.dpdk.org/api/patches/97741/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210902021505.17607-16-ndabilpuram@marvell.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210902021505.17607-16-ndabilpuram@marvell.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210902021505.17607-16-ndabilpuram@marvell.com",
    "date": "2021-09-02T02:14:53",
    "name": "[15/27] net/cnxk: add inline security support for cn9k",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "c3fdf048d039138b173ac25f339b97b90dedf79e",
    "submitter": {
        "id": 1202,
        "url": "https://patches.dpdk.org/api/people/1202/?format=api",
        "name": "Nithin Dabilpuram",
        "email": "ndabilpuram@marvell.com"
    },
    "delegate": {
        "id": 310,
        "url": "https://patches.dpdk.org/api/users/310/?format=api",
        "username": "jerin",
        "first_name": "Jerin",
        "last_name": "Jacob",
        "email": "jerinj@marvell.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210902021505.17607-16-ndabilpuram@marvell.com/mbox/",
    "series": [
        {
            "id": 18612,
            "url": "https://patches.dpdk.org/api/series/18612/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=18612",
            "date": "2021-09-02T02:14:38",
            "name": "net/cnxk: support for inline ipsec",
            "version": 1,
            "mbox": "https://patches.dpdk.org/series/18612/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/97741/comments/",
    "check": "success",
    "checks": "https://patches.dpdk.org/api/patches/97741/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id E020BA0C4C;\n\tThu,  2 Sep 2021 04:18:26 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 2125241134;\n\tThu,  2 Sep 2021 04:17:34 +0200 (CEST)",
            "from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com\n [67.231.148.174])\n by mails.dpdk.org (Postfix) with ESMTP id E852E41144\n for <dev@dpdk.org>; Thu,  2 Sep 2021 04:17:32 +0200 (CEST)",
            "from pps.filterd (m0045849.ppops.net [127.0.0.1])\n by mx0a-0016f401.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id\n 181K5Dsm025890;\n Wed, 1 Sep 2021 19:17:29 -0700",
            "from dc5-exch01.marvell.com ([199.233.59.181])\n by mx0a-0016f401.pphosted.com with ESMTP id 3atg8a91jx-1\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT);\n Wed, 01 Sep 2021 19:17:29 -0700",
            "from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.18;\n Wed, 1 Sep 2021 19:17:27 -0700",
            "from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com\n (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.18 via Frontend\n Transport; Wed, 1 Sep 2021 19:17:27 -0700",
            "from hyd1588t430.marvell.com (unknown [10.29.52.204])\n by maili.marvell.com (Postfix) with ESMTP id 0935B3F704B;\n Wed,  1 Sep 2021 19:17:24 -0700 (PDT)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;\n h=from : to : cc :\n subject : date : message-id : in-reply-to : references : mime-version :\n content-type; s=pfpt0220; bh=0P24A4vAHMg79ZQzx8TKPqmKM4H77Yc5saFJiFYAVaE=;\n b=kEjbWBBy9yVxuthOxEKBd4zOmT23T2Jq6GWMHVF3nUr3/GfzMq/nFGOuKHxeUbDzp+gO\n 3n80wjQE3K2SF7lLUZByeq5Engw1OHGg5pCm2JkvyPiHMcuiUIezPxc45KZthpGmYmsf\n 365ImBPsTmqlqzS5m754wpHZ1e6swKu6DlId0gcbg2Y/hScm3s3Hn4ljTMwNIBMj5sIc\n HqG2GvsqFZ9uGl2CgXTBBH1mH5MaykQ9DSsf1EL1NY7aopkIli3Pzl8gmrpC27N80Cbz\n Dc30nkw+eeOclVvQbdQ7HNyT+tH431LtGuU09DNPl+WIFKHLYBzztTFm6YLZ/rJciggD tA==",
        "From": "Nithin Dabilpuram <ndabilpuram@marvell.com>",
        "To": "Nithin Dabilpuram <ndabilpuram@marvell.com>, Kiran Kumar K\n <kirankumark@marvell.com>, Sunil Kumar Kori <skori@marvell.com>, Satha Rao\n <skoteshwar@marvell.com>, Ray Kinsella <mdr@ashroe.eu>, Anatoly Burakov\n <anatoly.burakov@intel.com>",
        "CC": "<jerinj@marvell.com>, <schalla@marvell.com>, <dev@dpdk.org>",
        "Date": "Thu, 2 Sep 2021 07:44:53 +0530",
        "Message-ID": "<20210902021505.17607-16-ndabilpuram@marvell.com>",
        "X-Mailer": "git-send-email 2.8.4",
        "In-Reply-To": "<20210902021505.17607-1-ndabilpuram@marvell.com>",
        "References": "<20210902021505.17607-1-ndabilpuram@marvell.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-Proofpoint-GUID": "tNrvjEINmJQ4F0hXQVtKUL_dmVntHl93",
        "X-Proofpoint-ORIG-GUID": "tNrvjEINmJQ4F0hXQVtKUL_dmVntHl93",
        "X-Proofpoint-Virus-Version": "vendor=baseguard\n engine=ICAP:2.0.182.1,Aquarius:18.0.790,Hydra:6.0.391,FMLib:17.0.607.475\n definitions=2021-09-01_05,2021-09-01_01,2020-04-07_01",
        "Subject": "[dpdk-dev] [PATCH 15/27] net/cnxk: add inline security support for\n cn9k",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Add support for inline inbound and outbound IPSec for SA create,\ndestroy and other NIX / CPT LF configurations.\n\nSigned-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>\n---\n drivers/net/cnxk/cn9k_ethdev.c         |  23 +++\n drivers/net/cnxk/cn9k_ethdev.h         |  61 +++++++\n drivers/net/cnxk/cn9k_ethdev_sec.c     | 313 +++++++++++++++++++++++++++++++++\n drivers/net/cnxk/cn9k_rx.h             |   1 +\n drivers/net/cnxk/cn9k_tx.h             |   1 +\n drivers/net/cnxk/cnxk_ethdev.c         | 214 +++++++++++++++++++++-\n drivers/net/cnxk/cnxk_ethdev.h         | 121 ++++++++++++-\n drivers/net/cnxk/cnxk_ethdev_devargs.c |  88 ++++++++-\n drivers/net/cnxk/cnxk_ethdev_sec.c     | 278 +++++++++++++++++++++++++++++\n drivers/net/cnxk/cnxk_lookup.c         |  50 +++++-\n drivers/net/cnxk/meson.build           |   2 +\n drivers/net/cnxk/version.map           |   5 +\n 12 files changed, 1146 insertions(+), 11 deletions(-)\n create mode 100644 drivers/net/cnxk/cn9k_ethdev_sec.c\n create mode 100644 drivers/net/cnxk/cnxk_ethdev_sec.c",
    "diff": "diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c\nindex 115e678..08c86f9 100644\n--- a/drivers/net/cnxk/cn9k_ethdev.c\n+++ b/drivers/net/cnxk/cn9k_ethdev.c\n@@ -36,6 +36,9 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)\n \tif (!dev->ptype_disable)\n \t\tflags |= NIX_RX_OFFLOAD_PTYPE_F;\n \n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)\n+\t\tflags |= NIX_RX_OFFLOAD_SECURITY_F;\n+\n \treturn flags;\n }\n \n@@ -101,6 +104,9 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)\n \tif ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))\n \t\tflags |= NIX_TX_OFFLOAD_TSTAMP_F;\n \n+\tif (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)\n+\t\tflags |= NIX_TX_OFFLOAD_SECURITY_F;\n+\n \treturn flags;\n }\n \n@@ -179,8 +185,10 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \t\t\tconst struct rte_eth_txconf *tx_conf)\n {\n \tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n+\tstruct roc_cpt_lf *inl_lf;\n \tstruct cn9k_eth_txq *txq;\n \tstruct roc_nix_sq *sq;\n+\tuint16_t crypto_qid;\n \tint rc;\n \n \tRTE_SET_USED(socket);\n@@ -200,6 +208,19 @@ cn9k_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \ttxq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;\n \ttxq->sqes_per_sqb_log2 = sq->sqes_per_sqb_log2;\n \n+\t/* Fetch CPT LF info for outbound if present */\n+\tif (dev->outb.lf_base) {\n+\t\tcrypto_qid = qid % dev->outb.nb_crypto_qs;\n+\t\tinl_lf = dev->outb.lf_base + crypto_qid;\n+\n+\t\ttxq->cpt_io_addr = inl_lf->io_addr;\n+\t\ttxq->cpt_fc = inl_lf->fc_addr;\n+\t\ttxq->cpt_desc = inl_lf->nb_desc * 0.7;\n+\t\ttxq->sa_base = (uint64_t)dev->outb.sa_base;\n+\t\ttxq->sa_base |= eth_dev->data->port_id;\n+\t\tPLT_STATIC_ASSERT(BIT_ULL(16) == ROC_NIX_INL_SA_BASE_ALIGN);\n+\t}\n+\n \tnix_form_default_desc(dev, txq, qid);\n \ttxq->lso_tun_fmt = dev->lso_tun_fmt;\n \treturn 0;\n@@ -508,6 +529,8 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)\n 
\tnix_eth_dev_ops_override();\n \tnpc_flow_ops_override();\n \n+\tcn9k_eth_sec_ops_override();\n+\n \t/* Common probe */\n \trc = cnxk_nix_probe(pci_drv, pci_dev);\n \tif (rc)\ndiff --git a/drivers/net/cnxk/cn9k_ethdev.h b/drivers/net/cnxk/cn9k_ethdev.h\nindex 3d4a206..f8818b8 100644\n--- a/drivers/net/cnxk/cn9k_ethdev.h\n+++ b/drivers/net/cnxk/cn9k_ethdev.h\n@@ -5,6 +5,7 @@\n #define __CN9K_ETHDEV_H__\n \n #include <cnxk_ethdev.h>\n+#include <cnxk_security.h>\n \n struct cn9k_eth_txq {\n \tuint64_t cmd[8];\n@@ -15,6 +16,10 @@ struct cn9k_eth_txq {\n \tuint64_t lso_tun_fmt;\n \tuint16_t sqes_per_sqb_log2;\n \tint16_t nb_sqb_bufs_adj;\n+\trte_iova_t cpt_io_addr;\n+\tuint64_t sa_base;\n+\tuint64_t *cpt_fc;\n+\tuint16_t cpt_desc;\n } __plt_cache_aligned;\n \n struct cn9k_eth_rxq {\n@@ -32,8 +37,64 @@ struct cn9k_eth_rxq {\n \tstruct cnxk_timesync_info *tstamp;\n } __plt_cache_aligned;\n \n+/* Private data in sw rsvd area of struct roc_onf_ipsec_inb_sa */\n+struct cn9k_inb_priv_data {\n+\tvoid *userdata;\n+\tstruct cnxk_eth_sec_sess *eth_sec;\n+};\n+\n+/* Private data in sw rsvd area of struct roc_onf_ipsec_outb_sa */\n+struct cn9k_outb_priv_data {\n+\tunion {\n+\t\tuint64_t esn;\n+\t\tstruct {\n+\t\t\tuint32_t seq;\n+\t\t\tuint32_t esn_hi;\n+\t\t};\n+\t};\n+\n+\t/* Rlen computation data */\n+\tstruct cnxk_ipsec_outb_rlens rlens;\n+\n+\t/* IP identifier */\n+\tuint16_t ip_id;\n+\n+\t/* SA index */\n+\tuint32_t sa_idx;\n+\n+\t/* Flags */\n+\tuint16_t copy_salt : 1;\n+\n+\t/* Salt */\n+\tuint32_t nonce;\n+\n+\t/* User data pointer */\n+\tvoid *userdata;\n+\n+\t/* Back pointer to eth sec session */\n+\tstruct cnxk_eth_sec_sess *eth_sec;\n+};\n+\n+struct cn9k_sec_sess_priv {\n+\tunion {\n+\t\tstruct {\n+\t\t\tuint32_t sa_idx;\n+\t\t\tuint8_t inb_sa : 1;\n+\t\t\tuint8_t rsvd1 : 2;\n+\t\t\tuint8_t roundup_byte : 5;\n+\t\t\tuint8_t roundup_len;\n+\t\t\tuint16_t partial_len;\n+\t\t};\n+\n+\t\tuint64_t u64;\n+\t};\n+} __rte_packed;\n+\n /* Rx and Tx routines */\n void 
cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev);\n void cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev);\n \n+/* Security context setup */\n+void cn9k_eth_sec_ops_override(void);\n+\n #endif /* __CN9K_ETHDEV_H__ */\ndiff --git a/drivers/net/cnxk/cn9k_ethdev_sec.c b/drivers/net/cnxk/cn9k_ethdev_sec.c\nnew file mode 100644\nindex 0000000..3ec7497\n--- /dev/null\n+++ b/drivers/net/cnxk/cn9k_ethdev_sec.c\n@@ -0,0 +1,313 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(C) 2021 Marvell.\n+ */\n+\n+#include <rte_cryptodev.h>\n+#include <rte_security.h>\n+#include <rte_security_driver.h>\n+\n+#include <cn9k_ethdev.h>\n+#include <cnxk_security.h>\n+\n+static struct rte_cryptodev_capabilities cn9k_eth_sec_crypto_caps[] = {\n+\t{\t/* AES GCM */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,\n+\t\t\t{.aead = {\n+\t\t\t\t.algo = RTE_CRYPTO_AEAD_AES_GCM,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.aad_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 4\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 12,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\tRTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()\n+};\n+\n+static const struct rte_security_capability cn9k_eth_sec_capabilities[] = {\n+\t{\t/* IPsec Inline Protocol ESP Tunnel Ingress */\n+\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,\n+\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t.ipsec = {\n+\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,\n+\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,\n+\t\t\t.options = { 0 }\n+\t\t},\n+\t\t.crypto_capabilities = 
cn9k_eth_sec_crypto_caps,\n+\t\t.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA\n+\t},\n+\t{\t/* IPsec Inline Protocol ESP Tunnel Egress */\n+\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,\n+\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t.ipsec = {\n+\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,\n+\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,\n+\t\t\t.options = { 0 }\n+\t\t},\n+\t\t.crypto_capabilities = cn9k_eth_sec_crypto_caps,\n+\t\t.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA\n+\t},\n+\t{\n+\t\t.action = RTE_SECURITY_ACTION_TYPE_NONE\n+\t}\n+};\n+\n+static int\n+cn9k_eth_sec_session_create(void *device,\n+\t\t\t    struct rte_security_session_conf *conf,\n+\t\t\t    struct rte_security_session *sess,\n+\t\t\t    struct rte_mempool *mempool)\n+{\n+\tstruct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;\n+\tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n+\tstruct rte_security_ipsec_xform *ipsec;\n+\tstruct cn9k_sec_sess_priv sess_priv;\n+\tstruct rte_crypto_sym_xform *crypto;\n+\tstruct cnxk_eth_sec_sess *eth_sec;\n+\tbool inbound;\n+\tint rc = 0;\n+\n+\tif (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)\n+\t\treturn -ENOTSUP;\n+\n+\tif (conf->protocol != RTE_SECURITY_PROTOCOL_IPSEC)\n+\t\treturn -ENOTSUP;\n+\n+\tif (rte_security_dynfield_register() < 0)\n+\t\treturn -ENOTSUP;\n+\n+\tipsec = &conf->ipsec;\n+\tcrypto = conf->crypto_xform;\n+\tinbound = !!(ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS);\n+\n+\t/* Search if a session already exists */\n+\tif (cnxk_eth_sec_sess_get_by_spi(dev, ipsec->spi, inbound)) {\n+\t\tplt_err(\"%s SA with SPI %u already in use\",\n+\t\t\tinbound ? 
\"Inbound\" : \"Outbound\", ipsec->spi);\n+\t\treturn -EEXIST;\n+\t}\n+\n+\tif (rte_mempool_get(mempool, (void **)&eth_sec)) {\n+\t\tplt_err(\"Could not allocate security session private data\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tmemset(eth_sec, 0, sizeof(struct cnxk_eth_sec_sess));\n+\tsess_priv.u64 = 0;\n+\n+\tif (inbound) {\n+\t\tstruct cn9k_inb_priv_data *inb_priv;\n+\t\tstruct roc_onf_ipsec_inb_sa *inb_sa;\n+\n+\t\tPLT_STATIC_ASSERT(sizeof(struct cn9k_inb_priv_data) <\n+\t\t\t\t  ROC_NIX_INL_ONF_IPSEC_INB_SW_RSVD);\n+\n+\t\t/* Get Inbound SA from NIX_RX_IPSEC_SA_BASE. Assume no inline\n+\t\t * device always for CN9K.\n+\t\t */\n+\t\tinb_sa = (struct roc_onf_ipsec_inb_sa *)\n+\t\t\troc_nix_inl_inb_sa_get(&dev->nix, false, ipsec->spi);\n+\t\tif (!inb_sa) {\n+\t\t\tplt_err(\"Failed to create ingress sa\");\n+\t\t\trc = -EFAULT;\n+\t\t\tgoto mempool_put;\n+\t\t}\n+\n+\t\t/* Check if SA is already in use */\n+\t\tif (inb_sa->ctl.valid) {\n+\t\t\tplt_err(\"Inbound SA with SPI %u already in use\",\n+\t\t\t\tipsec->spi);\n+\t\t\trc = -EBUSY;\n+\t\t\tgoto mempool_put;\n+\t\t}\n+\n+\t\tmemset(inb_sa, 0, sizeof(struct roc_onf_ipsec_inb_sa));\n+\n+\t\t/* Fill inbound sa params */\n+\t\trc = cnxk_onf_ipsec_inb_sa_fill(inb_sa, ipsec, crypto);\n+\t\tif (rc) {\n+\t\t\tplt_err(\"Failed to init inbound sa, rc=%d\", rc);\n+\t\t\tgoto mempool_put;\n+\t\t}\n+\n+\t\tinb_priv = roc_nix_inl_onf_ipsec_inb_sa_sw_rsvd(inb_sa);\n+\t\t/* Back pointer to get eth_sec */\n+\t\tinb_priv->eth_sec = eth_sec;\n+\n+\t\t/* Save userdata in inb private area */\n+\t\tinb_priv->userdata = conf->userdata;\n+\n+\t\tsess_priv.inb_sa = 1;\n+\t\tsess_priv.sa_idx = ipsec->spi;\n+\n+\t\t/* Pointer from eth_sec -> inb_sa */\n+\t\teth_sec->sa = inb_sa;\n+\t\teth_sec->sess = sess;\n+\t\teth_sec->sa_idx = ipsec->spi;\n+\t\teth_sec->spi = ipsec->spi;\n+\t\teth_sec->inb = true;\n+\n+\t\tTAILQ_INSERT_TAIL(&dev->inb.list, eth_sec, entry);\n+\t\tdev->inb.nb_sess++;\n+\t} else {\n+\t\tstruct cn9k_outb_priv_data 
*outb_priv;\n+\t\tstruct roc_onf_ipsec_outb_sa *outb_sa;\n+\t\tuintptr_t sa_base = dev->outb.sa_base;\n+\t\tstruct cnxk_ipsec_outb_rlens *rlens;\n+\t\tuint32_t sa_idx;\n+\n+\t\tPLT_STATIC_ASSERT(sizeof(struct cn9k_outb_priv_data) <\n+\t\t\t\t  ROC_NIX_INL_ONF_IPSEC_OUTB_SW_RSVD);\n+\n+\t\t/* Alloc an sa index */\n+\t\trc = cnxk_eth_outb_sa_idx_get(dev, &sa_idx);\n+\t\tif (rc)\n+\t\t\tgoto mempool_put;\n+\n+\t\toutb_sa = roc_nix_inl_onf_ipsec_outb_sa(sa_base, sa_idx);\n+\t\toutb_priv = roc_nix_inl_onf_ipsec_outb_sa_sw_rsvd(outb_sa);\n+\t\trlens = &outb_priv->rlens;\n+\n+\t\tmemset(outb_sa, 0, sizeof(struct roc_onf_ipsec_outb_sa));\n+\n+\t\t/* Fill outbound sa params */\n+\t\trc = cnxk_onf_ipsec_outb_sa_fill(outb_sa, ipsec, crypto);\n+\t\tif (rc) {\n+\t\t\tplt_err(\"Failed to init outbound sa, rc=%d\", rc);\n+\t\t\trc |= cnxk_eth_outb_sa_idx_put(dev, sa_idx);\n+\t\t\tgoto mempool_put;\n+\t\t}\n+\n+\t\t/* Save userdata */\n+\t\toutb_priv->userdata = conf->userdata;\n+\t\toutb_priv->sa_idx = sa_idx;\n+\t\toutb_priv->eth_sec = eth_sec;\n+\t\t/* Start sequence number with 1 */\n+\t\toutb_priv->seq = 1;\n+\n+\t\tmemcpy(&outb_priv->nonce, outb_sa->nonce, 4);\n+\t\tif (outb_sa->ctl.enc_type == ROC_IE_ON_SA_ENC_AES_GCM)\n+\t\t\toutb_priv->copy_salt = 1;\n+\n+\t\t/* Save rlen info */\n+\t\tcnxk_ipsec_outb_rlens_get(rlens, ipsec, crypto);\n+\n+\t\tsess_priv.sa_idx = outb_priv->sa_idx;\n+\t\tsess_priv.roundup_byte = rlens->roundup_byte;\n+\t\tsess_priv.roundup_len = rlens->roundup_len;\n+\t\tsess_priv.partial_len = rlens->partial_len;\n+\n+\t\t/* Pointer from eth_sec -> outb_sa */\n+\t\teth_sec->sa = outb_sa;\n+\t\teth_sec->sess = sess;\n+\t\teth_sec->sa_idx = sa_idx;\n+\t\teth_sec->spi = ipsec->spi;\n+\n+\t\tTAILQ_INSERT_TAIL(&dev->outb.list, eth_sec, entry);\n+\t\tdev->outb.nb_sess++;\n+\t}\n+\n+\t/* Sync SA content */\n+\tplt_atomic_thread_fence(__ATOMIC_ACQ_REL);\n+\n+\tplt_nix_dbg(\"Created %s session with spi=%u, sa_idx=%u\",\n+\t\t    inbound ? 
\"inbound\" : \"outbound\", eth_sec->spi,\n+\t\t    eth_sec->sa_idx);\n+\t/*\n+\t * Update fast path info in priv area.\n+\t */\n+\tset_sec_session_private_data(sess, (void *)sess_priv.u64);\n+\n+\treturn 0;\n+mempool_put:\n+\trte_mempool_put(mempool, eth_sec);\n+\treturn rc;\n+}\n+\n+static int\n+cn9k_eth_sec_session_destroy(void *device, struct rte_security_session *sess)\n+{\n+\tstruct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;\n+\tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n+\tstruct roc_onf_ipsec_outb_sa *outb_sa;\n+\tstruct roc_onf_ipsec_inb_sa *inb_sa;\n+\tstruct cnxk_eth_sec_sess *eth_sec;\n+\tstruct rte_mempool *mp;\n+\n+\teth_sec = cnxk_eth_sec_sess_get_by_sess(dev, sess);\n+\tif (!eth_sec)\n+\t\treturn -ENOENT;\n+\n+\tif (eth_sec->inb) {\n+\t\tinb_sa = eth_sec->sa;\n+\t\t/* Disable SA */\n+\t\tinb_sa->ctl.valid = 0;\n+\n+\t\tTAILQ_REMOVE(&dev->inb.list, eth_sec, entry);\n+\t\tdev->inb.nb_sess--;\n+\t} else {\n+\t\toutb_sa = eth_sec->sa;\n+\t\t/* Disable SA */\n+\t\toutb_sa->ctl.valid = 0;\n+\n+\t\t/* Release Outbound SA index */\n+\t\tcnxk_eth_outb_sa_idx_put(dev, eth_sec->sa_idx);\n+\t\tTAILQ_REMOVE(&dev->outb.list, eth_sec, entry);\n+\t\tdev->outb.nb_sess--;\n+\t}\n+\n+\t/* Sync SA content */\n+\tplt_atomic_thread_fence(__ATOMIC_ACQ_REL);\n+\n+\tplt_nix_dbg(\"Destroyed %s session with spi=%u, sa_idx=%u\",\n+\t\t    eth_sec->inb ? 
\"inbound\" : \"outbound\", eth_sec->spi,\n+\t\t    eth_sec->sa_idx);\n+\n+\t/* Put eth_sec object back to pool */\n+\tmp = rte_mempool_from_obj(eth_sec);\n+\tset_sec_session_private_data(sess, NULL);\n+\trte_mempool_put(mp, eth_sec);\n+\treturn 0;\n+}\n+\n+static const struct rte_security_capability *\n+cn9k_eth_sec_capabilities_get(void *device __rte_unused)\n+{\n+\treturn cn9k_eth_sec_capabilities;\n+}\n+\n+void\n+cn9k_eth_sec_ops_override(void)\n+{\n+\tstatic int init_once;\n+\n+\tif (init_once)\n+\t\treturn;\n+\tinit_once = 1;\n+\n+\t/* Update platform specific ops */\n+\tcnxk_eth_sec_ops.session_create = cn9k_eth_sec_session_create;\n+\tcnxk_eth_sec_ops.session_destroy = cn9k_eth_sec_session_destroy;\n+\tcnxk_eth_sec_ops.capabilities_get = cn9k_eth_sec_capabilities_get;\n+}\ndiff --git a/drivers/net/cnxk/cn9k_rx.h b/drivers/net/cnxk/cn9k_rx.h\nindex a3bf4e0..59545af 100644\n--- a/drivers/net/cnxk/cn9k_rx.h\n+++ b/drivers/net/cnxk/cn9k_rx.h\n@@ -17,6 +17,7 @@\n #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(3)\n #define NIX_RX_OFFLOAD_TSTAMP_F\t     BIT(4)\n #define NIX_RX_OFFLOAD_VLAN_STRIP_F  BIT(5)\n+#define NIX_RX_OFFLOAD_SECURITY_F    BIT(6)\n \n /* Flags to control cqe_to_mbuf conversion function.\n  * Defining it from backwards to denote its been\ndiff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h\nindex ed65cd3..a27ff76 100644\n--- a/drivers/net/cnxk/cn9k_tx.h\n+++ b/drivers/net/cnxk/cn9k_tx.h\n@@ -13,6 +13,7 @@\n #define NIX_TX_OFFLOAD_MBUF_NOFF_F    BIT(3)\n #define NIX_TX_OFFLOAD_TSO_F\t      BIT(4)\n #define NIX_TX_OFFLOAD_TSTAMP_F\t      BIT(5)\n+#define NIX_TX_OFFLOAD_SECURITY_F     BIT(6)\n \n /* Flags to control xmit_prepare function.\n  * Defining it from backwards to denote its been\ndiff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c\nindex 0e3652e..60a4df5 100644\n--- a/drivers/net/cnxk/cnxk_ethdev.c\n+++ b/drivers/net/cnxk/cnxk_ethdev.c\n@@ -38,6 +38,159 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)\n 
\treturn speed_capa;\n }\n \n+int\n+cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev)\n+{\n+\tstruct roc_nix *nix = &dev->nix;\n+\n+\tif (dev->inb.inl_dev == use_inl_dev)\n+\t\treturn 0;\n+\n+\tplt_nix_dbg(\"Security sessions(%u) still active, inl=%u!!!\",\n+\t\t    dev->inb.nb_sess, !!dev->inb.inl_dev);\n+\n+\t/* Change the mode */\n+\tdev->inb.inl_dev = use_inl_dev;\n+\n+\t/* Update RoC for NPC rule insertion */\n+\troc_nix_inb_mode_set(nix, use_inl_dev);\n+\n+\t/* Setup lookup mem */\n+\treturn cnxk_nix_lookup_mem_sa_base_set(dev);\n+}\n+\n+static int\n+nix_security_setup(struct cnxk_eth_dev *dev)\n+{\n+\tstruct roc_nix *nix = &dev->nix;\n+\tint i, rc = 0;\n+\n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {\n+\t\t/* Setup Inline Inbound */\n+\t\trc = roc_nix_inl_inb_init(nix);\n+\t\tif (rc) {\n+\t\t\tplt_err(\"Failed to initialize nix inline inb, rc=%d\",\n+\t\t\t\trc);\n+\t\t\treturn rc;\n+\t\t}\n+\n+\t\t/* By default pick using inline device for poll mode.\n+\t\t * Will be overridden when event mode rq's are setup.\n+\t\t */\n+\t\tcnxk_nix_inb_mode_set(dev, true);\n+\t}\n+\n+\t/* Setup Inline outbound */\n+\tif (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) {\n+\t\tstruct plt_bitmap *bmap;\n+\t\tsize_t bmap_sz;\n+\t\tvoid *mem;\n+\n+\t\t/* Cannot ask for Tx Inline without SAs */\n+\t\tif (!dev->outb.max_sa)\n+\t\t\treturn -EINVAL;\n+\n+\t\t/* Setup enough descriptors for all tx queues */\n+\t\tnix->outb_nb_desc = dev->outb.nb_desc;\n+\t\tnix->outb_nb_crypto_qs = dev->outb.nb_crypto_qs;\n+\n+\t\t/* Setup Inline Outbound */\n+\t\trc = roc_nix_inl_outb_init(nix);\n+\t\tif (rc) {\n+\t\t\tplt_err(\"Failed to initialize nix inline outb, rc=%d\",\n+\t\t\t\trc);\n+\t\t\tgoto cleanup;\n+\t\t}\n+\n+\t\trc = -ENOMEM;\n+\t\t/* Allocate a bitmap to alloc and free sa indexes */\n+\t\tbmap_sz = plt_bitmap_get_memory_footprint(dev->outb.max_sa);\n+\t\tmem = plt_zmalloc(bmap_sz, PLT_CACHE_LINE_SIZE);\n+\t\tif (mem == NULL) {\n+\t\t\tplt_err(\"Outbound SA 
bmap alloc failed\");\n+\n+\t\t\trc |= roc_nix_inl_outb_fini(nix);\n+\t\t\tgoto cleanup;\n+\t\t}\n+\n+\t\trc = -EIO;\n+\t\tbmap = plt_bitmap_init(dev->outb.max_sa, mem, bmap_sz);\n+\t\tif (!bmap) {\n+\t\t\tplt_err(\"Outbound SA bmap init failed\");\n+\n+\t\t\trc |= roc_nix_inl_outb_fini(nix);\n+\t\t\tplt_free(mem);\n+\t\t\tgoto cleanup;\n+\t\t}\n+\n+\t\tfor (i = 0; i < dev->outb.max_sa; i++)\n+\t\t\tplt_bitmap_set(bmap, i);\n+\n+\t\tdev->outb.sa_base = roc_nix_inl_outb_sa_base_get(nix);\n+\t\tdev->outb.sa_bmap_mem = mem;\n+\t\tdev->outb.sa_bmap = bmap;\n+\t\tdev->outb.lf_base = roc_nix_inl_outb_lf_base_get(nix);\n+\t}\n+\n+\treturn 0;\n+cleanup:\n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)\n+\t\trc |= roc_nix_inl_inb_fini(nix);\n+\treturn rc;\n+}\n+\n+static int\n+nix_security_release(struct cnxk_eth_dev *dev)\n+{\n+\tstruct rte_eth_dev *eth_dev = dev->eth_dev;\n+\tstruct cnxk_eth_sec_sess *eth_sec, *tvar;\n+\tstruct roc_nix *nix = &dev->nix;\n+\tint rc, ret = 0;\n+\n+\t/* Cleanup Inline inbound */\n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {\n+\t\t/* Destroy inbound sessions */\n+\t\ttvar = NULL;\n+\t\tTAILQ_FOREACH_SAFE(eth_sec, &dev->inb.list, entry, tvar)\n+\t\t\tcnxk_eth_sec_ops.session_destroy(eth_dev,\n+\t\t\t\t\t\t\t eth_sec->sess);\n+\n+\t\t/* Clear lookup mem */\n+\t\tcnxk_nix_lookup_mem_sa_base_clear(dev);\n+\n+\t\trc = roc_nix_inl_inb_fini(nix);\n+\t\tif (rc)\n+\t\t\tplt_err(\"Failed to cleanup nix inline inb, rc=%d\", rc);\n+\t\tret |= rc;\n+\t}\n+\n+\t/* Cleanup Inline outbound */\n+\tif (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) {\n+\t\t/* Destroy outbound sessions */\n+\t\ttvar = NULL;\n+\t\tTAILQ_FOREACH_SAFE(eth_sec, &dev->outb.list, entry, tvar)\n+\t\t\tcnxk_eth_sec_ops.session_destroy(eth_dev,\n+\t\t\t\t\t\t\t eth_sec->sess);\n+\n+\t\trc = roc_nix_inl_outb_fini(nix);\n+\t\tif (rc)\n+\t\t\tplt_err(\"Failed to cleanup nix inline outb, rc=%d\", rc);\n+\t\tret |= 
rc;\n+\n+\t\tplt_bitmap_free(dev->outb.sa_bmap);\n+\t\tplt_free(dev->outb.sa_bmap_mem);\n+\t\tdev->outb.sa_bmap = NULL;\n+\t\tdev->outb.sa_bmap_mem = NULL;\n+\t}\n+\n+\tdev->inb.inl_dev = false;\n+\troc_nix_inb_mode_set(nix, false);\n+\tdev->nb_rxq_sso = 0;\n+\tdev->inb.nb_sess = 0;\n+\tdev->outb.nb_sess = 0;\n+\treturn ret;\n+}\n+\n static void\n nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)\n {\n@@ -194,6 +347,12 @@ cnxk_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \t\teth_dev->data->tx_queues[qid] = NULL;\n \t}\n \n+\t/* When Tx Security offload is enabled, increase tx desc count by\n+\t * max possible outbound desc count.\n+\t */\n+\tif (dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY)\n+\t\tnb_desc += dev->outb.nb_desc;\n+\n \t/* Setup ROC SQ */\n \tsq = &dev->sqs[qid];\n \tsq->qid = qid;\n@@ -266,6 +425,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \t\t\tstruct rte_mempool *mp)\n {\n \tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n+\tstruct roc_nix *nix = &dev->nix;\n \tstruct cnxk_eth_rxq_sp *rxq_sp;\n \tstruct rte_mempool_ops *ops;\n \tconst char *platform_ops;\n@@ -328,6 +488,10 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \trq->later_skip = sizeof(struct rte_mbuf);\n \trq->lpb_size = mp->elt_size;\n \n+\t/* Enable Inline IPSec on RQ, will not be used for Poll mode */\n+\tif (roc_nix_inl_inb_is_enabled(nix))\n+\t\trq->ipsech_ena = true;\n+\n \trc = roc_nix_rq_init(&dev->nix, rq, !!eth_dev->data->dev_started);\n \tif (rc) {\n \t\tplt_err(\"Failed to init roc rq for rq=%d, rc=%d\", qid, rc);\n@@ -350,6 +514,13 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \trxq_sp->qconf.nb_desc = nb_desc;\n \trxq_sp->qconf.mp = mp;\n \n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {\n+\t\t/* Setup rq reference for inline dev if present */\n+\t\trc = roc_nix_inl_dev_rq_get(rq);\n+\t\tif (rc)\n+\t\t\tgoto free_mem;\n+\t}\n+\n \tplt_nix_dbg(\"rq=%d pool=%s 
nb_desc=%d->%d\", qid, mp->name, nb_desc,\n \t\t    cq->nb_desc);\n \n@@ -370,6 +541,8 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,\n \t}\n \n \treturn 0;\n+free_mem:\n+\tplt_free(rxq_sp);\n rq_fini:\n \trc |= roc_nix_rq_fini(rq);\n cq_fini:\n@@ -394,11 +567,15 @@ cnxk_nix_rx_queue_release(void *rxq)\n \trxq_sp = cnxk_eth_rxq_to_sp(rxq);\n \tdev = rxq_sp->dev;\n \tqid = rxq_sp->qid;\n+\trq = &dev->rqs[qid];\n \n \tplt_nix_dbg(\"Releasing rxq %u\", qid);\n \n+\t/* Release rq reference for inline dev if present */\n+\tif (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)\n+\t\troc_nix_inl_dev_rq_put(rq);\n+\n \t/* Cleanup ROC RQ */\n-\trq = &dev->rqs[qid];\n \trc = roc_nix_rq_fini(rq);\n \tif (rc)\n \t\tplt_err(\"Failed to cleanup rq, rc=%d\", rc);\n@@ -804,6 +981,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)\n \t\trc = nix_store_queue_cfg_and_then_release(eth_dev);\n \t\tif (rc)\n \t\t\tgoto fail_configure;\n+\n+\t\t/* Cleanup security support */\n+\t\trc = nix_security_release(dev);\n+\t\tif (rc)\n+\t\t\tgoto fail_configure;\n+\n \t\troc_nix_tm_fini(nix);\n \t\troc_nix_lf_free(nix);\n \t}\n@@ -958,6 +1141,12 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)\n \t\tplt_err(\"Failed to initialize flow control rc=%d\", rc);\n \t\tgoto cq_fini;\n \t}\n+\n+\t/* Setup Inline security support */\n+\trc = nix_security_setup(dev);\n+\tif (rc)\n+\t\tgoto cq_fini;\n+\n \t/*\n \t * Restore queue config when reconfigure followed by\n \t * reconfigure and no queue configure invoked from application case.\n@@ -965,7 +1154,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)\n \tif (dev->configured == 1) {\n \t\trc = nix_restore_queue_cfg(eth_dev);\n \t\tif (rc)\n-\t\t\tgoto cq_fini;\n+\t\t\tgoto sec_release;\n \t}\n \n \t/* Update the mac address */\n@@ -987,6 +1176,8 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)\n \tdev->nb_txq = data->nb_tx_queues;\n \treturn 0;\n \n+sec_release:\n+\trc |= nix_security_release(dev);\n cq_fini:\n 
\troc_nix_unregister_cq_irqs(nix);\n q_irq_fini:\n@@ -1282,12 +1473,25 @@ static int\n cnxk_eth_dev_init(struct rte_eth_dev *eth_dev)\n {\n \tstruct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);\n+\tstruct rte_security_ctx *sec_ctx;\n \tstruct roc_nix *nix = &dev->nix;\n \tstruct rte_pci_device *pci_dev;\n \tint rc, max_entries;\n \n \teth_dev->dev_ops = &cnxk_eth_dev_ops;\n \n+\t/* Alloc security context */\n+\tsec_ctx = plt_zmalloc(sizeof(struct rte_security_ctx), 0);\n+\tif (!sec_ctx)\n+\t\treturn -ENOMEM;\n+\tsec_ctx->device = eth_dev;\n+\tsec_ctx->ops = &cnxk_eth_sec_ops;\n+\tsec_ctx->flags =\n+\t\t(RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);\n+\teth_dev->security_ctx = sec_ctx;\n+\tTAILQ_INIT(&dev->inb.list);\n+\tTAILQ_INIT(&dev->outb.list);\n+\n \t/* For secondary processes, the primary has done all the work */\n \tif (rte_eal_process_type() != RTE_PROC_PRIMARY)\n \t\treturn 0;\n@@ -1400,6 +1604,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)\n \tstruct roc_nix *nix = &dev->nix;\n \tint rc, i;\n \n+\tplt_free(eth_dev->security_ctx);\n+\teth_dev->security_ctx = NULL;\n+\n \t/* Nothing to be done for secondary processes */\n \tif (rte_eal_process_type() != RTE_PROC_PRIMARY)\n \t\treturn 0;\n@@ -1429,6 +1636,9 @@ cnxk_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool reset)\n \t}\n \teth_dev->data->nb_rx_queues = 0;\n \n+\t/* Free security resources */\n+\tnix_security_release(dev);\n+\n \t/* Free tm resources */\n \troc_nix_tm_fini(nix);\n \ndiff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h\nindex 2528b3c..5ae791f 100644\n--- a/drivers/net/cnxk/cnxk_ethdev.h\n+++ b/drivers/net/cnxk/cnxk_ethdev.h\n@@ -13,6 +13,9 @@\n #include <rte_mbuf.h>\n #include <rte_mbuf_pool_ops.h>\n #include <rte_mempool.h>\n+#include <rte_security.h>\n+#include <rte_security_driver.h>\n+#include <rte_tailq.h>\n #include <rte_time.h>\n \n #include \"roc_api.h\"\n@@ -70,14 +73,14 @@\n \t DEV_TX_OFFLOAD_SCTP_CKSUM | 
DEV_TX_OFFLOAD_TCP_TSO |                  \\\n \t DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \\\n \t DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \\\n-\t DEV_TX_OFFLOAD_IPV4_CKSUM)\n+\t DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_SECURITY)\n \n #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \\\n \t(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \\\n \t DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \\\n \t DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \\\n \t DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \\\n-\t DEV_RX_OFFLOAD_VLAN_STRIP)\n+\t DEV_RX_OFFLOAD_VLAN_STRIP | DEV_RX_OFFLOAD_SECURITY)\n \n #define RSS_IPV4_ENABLE                                                        \\\n \t(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \\\n@@ -112,6 +115,11 @@\n #define PTYPE_TUNNEL_ARRAY_SZ\t  BIT(PTYPE_TUNNEL_WIDTH)\n #define PTYPE_ARRAY_SZ                                                         \\\n \t((PTYPE_NON_TUNNEL_ARRAY_SZ + PTYPE_TUNNEL_ARRAY_SZ) * sizeof(uint16_t))\n+\n+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */\n+#define ERRCODE_ERRLEN_WIDTH 12\n+#define ERR_ARRAY_SZ\t     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))\n+\n /* Fastpath lookup */\n #define CNXK_NIX_FASTPATH_LOOKUP_MEM \"cnxk_nix_fastpath_lookup_mem\"\n \n@@ -119,6 +127,9 @@\n \t((1ull << (PKT_TX_TUNNEL_VXLAN >> 45)) |                               \\\n \t (1ull << (PKT_TX_TUNNEL_GENEVE >> 45)))\n \n+/* Subtype from inline outbound error event */\n+#define CNXK_ETHDEV_SEC_OUTB_EV_SUB 0xFFUL\n+\n struct cnxk_fc_cfg {\n \tenum rte_eth_fc_mode mode;\n \tuint8_t rx_pause;\n@@ -144,6 +155,82 @@ struct cnxk_timesync_info {\n \tuint64_t *tx_tstamp;\n } __plt_cache_aligned;\n \n+/* Security session private data */\n+struct cnxk_eth_sec_sess {\n+\t/* List entry 
*/\n+\tTAILQ_ENTRY(cnxk_eth_sec_sess) entry;\n+\n+\t/* Inbound SA is from NIX_RX_IPSEC_SA_BASE or\n+\t * Outbound SA from roc_nix_inl_outb_sa_base_get()\n+\t */\n+\tvoid *sa;\n+\n+\t/* SA index */\n+\tuint32_t sa_idx;\n+\n+\t/* SPI */\n+\tuint32_t spi;\n+\n+\t/* Back pointer to session */\n+\tstruct rte_security_session *sess;\n+\n+\t/* Inbound */\n+\tbool inb;\n+\n+\t/* Inbound session on inl dev */\n+\tbool inl_dev;\n+};\n+\n+TAILQ_HEAD(cnxk_eth_sec_sess_list, cnxk_eth_sec_sess);\n+\n+/* Inbound security data */\n+struct cnxk_eth_dev_sec_inb {\n+\t/* IPSec inbound max SPI */\n+\tuint16_t max_spi;\n+\n+\t/* Using inbound with inline device */\n+\tbool inl_dev;\n+\n+\t/* Device argument to force inline device for inb */\n+\tbool force_inl_dev;\n+\n+\t/* Active sessions */\n+\tuint16_t nb_sess;\n+\n+\t/* List of sessions */\n+\tstruct cnxk_eth_sec_sess_list list;\n+};\n+\n+/* Outbound security data */\n+struct cnxk_eth_dev_sec_outb {\n+\t/* IPSec outbound max SA */\n+\tuint16_t max_sa;\n+\n+\t/* Per CPT LF descriptor count */\n+\tuint32_t nb_desc;\n+\n+\t/* SA Bitmap */\n+\tstruct plt_bitmap *sa_bmap;\n+\n+\t/* SA bitmap memory */\n+\tvoid *sa_bmap_mem;\n+\n+\t/* SA base */\n+\tuint64_t sa_base;\n+\n+\t/* CPT LF base */\n+\tstruct roc_cpt_lf *lf_base;\n+\n+\t/* Crypto queues => CPT lf count */\n+\tuint16_t nb_crypto_qs;\n+\n+\t/* Active sessions */\n+\tuint16_t nb_sess;\n+\n+\t/* List of sessions */\n+\tstruct cnxk_eth_sec_sess_list list;\n+};\n+\n struct cnxk_eth_dev {\n \t/* ROC NIX */\n \tstruct roc_nix nix;\n@@ -159,6 +246,7 @@ struct cnxk_eth_dev {\n \t/* Configured queue count */\n \tuint16_t nb_rxq;\n \tuint16_t nb_txq;\n+\tuint16_t nb_rxq_sso;\n \tuint8_t configured;\n \n \t/* Max macfilter entries */\n@@ -223,6 +311,10 @@ struct cnxk_eth_dev {\n \t/* Per queue statistics counters */\n \tuint32_t txq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];\n \tuint32_t rxq_stat_map[RTE_ETHDEV_QUEUE_STAT_CNTRS];\n+\n+\t/* Security data */\n+\tstruct cnxk_eth_dev_sec_inb 
inb;\n+\tstruct cnxk_eth_dev_sec_outb outb;\n };\n \n struct cnxk_eth_rxq_sp {\n@@ -261,6 +353,9 @@ extern struct eth_dev_ops cnxk_eth_dev_ops;\n /* Common flow ops */\n extern struct rte_flow_ops cnxk_flow_ops;\n \n+/* Common security ops */\n+extern struct rte_security_ops cnxk_eth_sec_ops;\n+\n /* Ops */\n int cnxk_nix_probe(struct rte_pci_driver *pci_drv,\n \t\t   struct rte_pci_device *pci_dev);\n@@ -383,6 +478,18 @@ int cnxk_ethdev_parse_devargs(struct rte_devargs *devargs,\n /* Debug */\n int cnxk_nix_dev_get_reg(struct rte_eth_dev *eth_dev,\n \t\t\t struct rte_dev_reg_info *regs);\n+/* Security */\n+int cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p);\n+int cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx);\n+int cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev);\n+int cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev);\n+__rte_internal\n+int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);\n+struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,\n+\t\t\t\t\t\t       uint32_t spi, bool inb);\n+struct cnxk_eth_sec_sess *\n+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,\n+\t\t\t      struct rte_security_session *sess);\n \n /* Other private functions */\n int nix_recalc_mtu(struct rte_eth_dev *eth_dev);\n@@ -493,4 +600,14 @@ cnxk_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,\n \t}\n }\n \n+static __rte_always_inline uintptr_t\n+cnxk_nix_sa_base_get(uint16_t port, const void *lookup_mem)\n+{\n+\tuintptr_t sa_base_tbl;\n+\n+\tsa_base_tbl = (uintptr_t)lookup_mem;\n+\tsa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;\n+\treturn *((const uintptr_t *)sa_base_tbl + port);\n+}\n+\n #endif /* __CNXK_ETHDEV_H__ */\ndiff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c\nindex 37720fb..c0b949e 100644\n--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c\n+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c\n@@ -8,6 +8,61 @@\n #include 
\"cnxk_ethdev.h\"\n \n static int\n+parse_outb_nb_desc(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\t*(uint16_t *)extra_args = val;\n+\n+\treturn 0;\n+}\n+\n+static int\n+parse_outb_nb_crypto_qs(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\tif (val < 1 || val > 64)\n+\t\treturn -EINVAL;\n+\n+\t*(uint16_t *)extra_args = val;\n+\n+\treturn 0;\n+}\n+\n+static int\n+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\t*(uint16_t *)extra_args = val;\n+\n+\treturn 0;\n+}\n+\n+static int\n+parse_ipsec_out_max_sa(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\t*(uint16_t *)extra_args = val;\n+\n+\treturn 0;\n+}\n+\n+static int\n parse_flow_max_priority(const char *key, const char *value, void *extra_args)\n {\n \tRTE_SET_USED(key);\n@@ -117,15 +172,25 @@ parse_switch_header_type(const char *key, const char *value, void *extra_args)\n #define CNXK_SWITCH_HEADER_TYPE \"switch_header\"\n #define CNXK_RSS_TAG_AS_XOR\t\"tag_as_xor\"\n #define CNXK_LOCK_RX_CTX\t\"lock_rx_ctx\"\n+#define CNXK_IPSEC_IN_MAX_SPI\t\"ipsec_in_max_spi\"\n+#define CNXK_IPSEC_OUT_MAX_SA\t\"ipsec_out_max_sa\"\n+#define CNXK_OUTB_NB_DESC\t\"outb_nb_desc\"\n+#define CNXK_FORCE_INB_INL_DEV\t\"force_inb_inl_dev\"\n+#define CNXK_OUTB_NB_CRYPTO_QS\t\"outb_nb_crypto_qs\"\n \n int\n cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)\n {\n \tuint16_t reta_sz = ROC_NIX_RSS_RETA_SZ_64;\n \tuint16_t sqb_count = CNXK_NIX_TX_MAX_SQB;\n+\tuint16_t ipsec_in_max_spi = BIT(8) - 1;\n+\tuint16_t ipsec_out_max_sa = BIT(12);\n \tuint16_t flow_prealloc_size = 1;\n \tuint16_t switch_header_type = 0;\n \tuint16_t flow_max_priority = 3;\n+\tuint16_t 
force_inb_inl_dev = 0;\n+\tuint16_t outb_nb_crypto_qs = 1;\n+\tuint16_t outb_nb_desc = 8200;\n \tuint16_t rss_tag_as_xor = 0;\n \tuint16_t scalar_enable = 0;\n \tuint8_t lock_rx_ctx = 0;\n@@ -153,10 +218,27 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)\n \trte_kvargs_process(kvlist, CNXK_RSS_TAG_AS_XOR, &parse_flag,\n \t\t\t   &rss_tag_as_xor);\n \trte_kvargs_process(kvlist, CNXK_LOCK_RX_CTX, &parse_flag, &lock_rx_ctx);\n+\trte_kvargs_process(kvlist, CNXK_IPSEC_IN_MAX_SPI,\n+\t\t\t   &parse_ipsec_in_max_spi, &ipsec_in_max_spi);\n+\trte_kvargs_process(kvlist, CNXK_IPSEC_OUT_MAX_SA,\n+\t\t\t   &parse_ipsec_out_max_sa, &ipsec_out_max_sa);\n+\trte_kvargs_process(kvlist, CNXK_OUTB_NB_DESC, &parse_outb_nb_desc,\n+\t\t\t   &outb_nb_desc);\n+\trte_kvargs_process(kvlist, CNXK_OUTB_NB_CRYPTO_QS,\n+\t\t\t   &parse_outb_nb_crypto_qs, &outb_nb_crypto_qs);\n+\trte_kvargs_process(kvlist, CNXK_FORCE_INB_INL_DEV, &parse_flag,\n+\t\t\t   &force_inb_inl_dev);\n \trte_kvargs_free(kvlist);\n \n null_devargs:\n \tdev->scalar_ena = !!scalar_enable;\n+\tdev->inb.force_inl_dev = !!force_inb_inl_dev;\n+\tdev->inb.max_spi = ipsec_in_max_spi;\n+\tdev->outb.max_sa = ipsec_out_max_sa;\n+\tdev->outb.nb_desc = outb_nb_desc;\n+\tdev->outb.nb_crypto_qs = outb_nb_crypto_qs;\n+\tdev->nix.ipsec_in_max_spi = ipsec_in_max_spi;\n+\tdev->nix.ipsec_out_max_sa = ipsec_out_max_sa;\n \tdev->nix.rss_tag_as_xor = !!rss_tag_as_xor;\n \tdev->nix.max_sqb_count = sqb_count;\n \tdev->nix.reta_sz = reta_sz;\n@@ -177,4 +259,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_cnxk,\n \t\t\t      CNXK_FLOW_PREALLOC_SIZE \"=<1-32>\"\n \t\t\t      CNXK_FLOW_MAX_PRIORITY \"=<1-32>\"\n \t\t\t      CNXK_SWITCH_HEADER_TYPE \"=<higig2|dsa|chlen90b>\"\n-\t\t\t      CNXK_RSS_TAG_AS_XOR \"=1\");\n+\t\t\t      CNXK_RSS_TAG_AS_XOR \"=1\"\n+\t\t\t      CNXK_IPSEC_IN_MAX_SPI \"=<1-65535>\"\n+\t\t\t      CNXK_OUTB_NB_DESC \"=<1-65535>\"\n+\t\t\t      CNXK_OUTB_NB_CRYPTO_QS \"=<1-64>\"\n+\t\t\t      
CNXK_FORCE_INB_INL_DEV \"=1\");\ndiff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c\nnew file mode 100644\nindex 0000000..c002c30\n--- /dev/null\n+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c\n@@ -0,0 +1,278 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(C) 2021 Marvell.\n+ */\n+\n+#include <cnxk_ethdev.h>\n+\n+#define CNXK_NIX_INL_SELFTEST\t      \"selftest\"\n+#define CNXK_NIX_INL_IPSEC_IN_MAX_SPI \"ipsec_in_max_spi\"\n+\n+#define CNXK_NIX_INL_DEV_NAME RTE_STR(cnxk_nix_inl_dev_)\n+#define CNXK_NIX_INL_DEV_NAME_LEN                                              \\\n+\t(sizeof(CNXK_NIX_INL_DEV_NAME) + PCI_PRI_STR_SIZE)\n+\n+static inline int\n+bitmap_ctzll(uint64_t slab)\n+{\n+\tif (slab == 0)\n+\t\treturn 0;\n+\n+\treturn __builtin_ctzll(slab);\n+}\n+\n+int\n+cnxk_eth_outb_sa_idx_get(struct cnxk_eth_dev *dev, uint32_t *idx_p)\n+{\n+\tuint32_t pos, idx;\n+\tuint64_t slab;\n+\tint rc;\n+\n+\tif (!dev->outb.max_sa)\n+\t\treturn -ENOTSUP;\n+\n+\tpos = 0;\n+\tslab = 0;\n+\t/* Scan from the beginning */\n+\tplt_bitmap_scan_init(dev->outb.sa_bmap);\n+\t/* Scan bitmap to get the free sa index */\n+\trc = plt_bitmap_scan(dev->outb.sa_bmap, &pos, &slab);\n+\t/* Empty bitmap */\n+\tif (rc == 0) {\n+\t\tplt_err(\"Outbound SA' exhausted, use 'ipsec_out_max_sa' \"\n+\t\t\t\"devargs to increase\");\n+\t\treturn -ERANGE;\n+\t}\n+\n+\t/* Get free SA index */\n+\tidx = pos + bitmap_ctzll(slab);\n+\tplt_bitmap_clear(dev->outb.sa_bmap, idx);\n+\t*idx_p = idx;\n+\treturn 0;\n+}\n+\n+int\n+cnxk_eth_outb_sa_idx_put(struct cnxk_eth_dev *dev, uint32_t idx)\n+{\n+\tif (idx >= dev->outb.max_sa)\n+\t\treturn -EINVAL;\n+\n+\t/* Check if it is already free */\n+\tif (plt_bitmap_get(dev->outb.sa_bmap, idx))\n+\t\treturn -EINVAL;\n+\n+\t/* Mark index as free */\n+\tplt_bitmap_set(dev->outb.sa_bmap, idx);\n+\treturn 0;\n+}\n+\n+struct cnxk_eth_sec_sess *\n+cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev, uint32_t spi, bool inb)\n+{\n+\tstruct 
cnxk_eth_sec_sess_list *list;\n+\tstruct cnxk_eth_sec_sess *eth_sec;\n+\n+\tlist = inb ? &dev->inb.list : &dev->outb.list;\n+\tTAILQ_FOREACH(eth_sec, list, entry) {\n+\t\tif (eth_sec->spi == spi)\n+\t\t\treturn eth_sec;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+struct cnxk_eth_sec_sess *\n+cnxk_eth_sec_sess_get_by_sess(struct cnxk_eth_dev *dev,\n+\t\t\t      struct rte_security_session *sess)\n+{\n+\tstruct cnxk_eth_sec_sess *eth_sec = NULL;\n+\n+\t/* Search in inbound list */\n+\tTAILQ_FOREACH(eth_sec, &dev->inb.list, entry) {\n+\t\tif (eth_sec->sess == sess)\n+\t\t\treturn eth_sec;\n+\t}\n+\n+\t/* Search in outbound list */\n+\tTAILQ_FOREACH(eth_sec, &dev->outb.list, entry) {\n+\t\tif (eth_sec->sess == sess)\n+\t\t\treturn eth_sec;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static unsigned int\n+cnxk_eth_sec_session_get_size(void *device __rte_unused)\n+{\n+\treturn sizeof(struct cnxk_eth_sec_sess);\n+}\n+\n+struct rte_security_ops cnxk_eth_sec_ops = {\n+\t.session_get_size = cnxk_eth_sec_session_get_size\n+};\n+\n+static int\n+parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\t*(uint16_t *)extra_args = val;\n+\n+\treturn 0;\n+}\n+\n+static int\n+parse_selftest(const char *key, const char *value, void *extra_args)\n+{\n+\tRTE_SET_USED(key);\n+\tuint32_t val;\n+\n+\tval = atoi(value);\n+\n+\t*(uint8_t *)extra_args = !!(val == 1);\n+\treturn 0;\n+}\n+\n+static int\n+nix_inl_parse_devargs(struct rte_devargs *devargs,\n+\t\t      struct roc_nix_inl_dev *inl_dev)\n+{\n+\tuint32_t ipsec_in_max_spi = BIT(8) - 1;\n+\tstruct rte_kvargs *kvlist;\n+\tuint8_t selftest = 0;\n+\n+\tif (devargs == NULL)\n+\t\tgoto null_devargs;\n+\n+\tkvlist = rte_kvargs_parse(devargs->args, NULL);\n+\tif (kvlist == NULL)\n+\t\tgoto exit;\n+\n+\trte_kvargs_process(kvlist, CNXK_NIX_INL_SELFTEST, &parse_selftest,\n+\t\t\t   &selftest);\n+\trte_kvargs_process(kvlist, CNXK_NIX_INL_IPSEC_IN_MAX_SPI,\n+\t\t\t   
&parse_ipsec_in_max_spi, &ipsec_in_max_spi);\n+\trte_kvargs_free(kvlist);\n+\n+null_devargs:\n+\tinl_dev->ipsec_in_max_spi = ipsec_in_max_spi;\n+\tinl_dev->selftest = selftest;\n+\treturn 0;\n+exit:\n+\treturn -EINVAL;\n+}\n+\n+static inline char *\n+nix_inl_dev_to_name(struct rte_pci_device *pci_dev, char *name)\n+{\n+\tsnprintf(name, CNXK_NIX_INL_DEV_NAME_LEN,\n+\t\t CNXK_NIX_INL_DEV_NAME PCI_PRI_FMT, pci_dev->addr.domain,\n+\t\t pci_dev->addr.bus, pci_dev->addr.devid,\n+\t\t pci_dev->addr.function);\n+\n+\treturn name;\n+}\n+\n+static int\n+cnxk_nix_inl_dev_remove(struct rte_pci_device *pci_dev)\n+{\n+\tchar name[CNXK_NIX_INL_DEV_NAME_LEN];\n+\tconst struct rte_memzone *mz;\n+\tstruct roc_nix_inl_dev *dev;\n+\tint rc;\n+\n+\tif (rte_eal_process_type() != RTE_PROC_PRIMARY)\n+\t\treturn 0;\n+\n+\tmz = rte_memzone_lookup(nix_inl_dev_to_name(pci_dev, name));\n+\tif (!mz)\n+\t\treturn 0;\n+\n+\tdev = mz->addr;\n+\n+\t/* Cleanup inline dev */\n+\trc = roc_nix_inl_dev_fini(dev);\n+\tif (rc) {\n+\t\tplt_err(\"Failed to cleanup inl dev, rc=%d(%s)\", rc,\n+\t\t\troc_error_msg_get(rc));\n+\t\treturn rc;\n+\t}\n+\n+\trte_memzone_free(mz);\n+\treturn 0;\n+}\n+\n+static int\n+cnxk_nix_inl_dev_probe(struct rte_pci_driver *pci_drv,\n+\t\t       struct rte_pci_device *pci_dev)\n+{\n+\tchar name[CNXK_NIX_INL_DEV_NAME_LEN];\n+\tstruct roc_nix_inl_dev *inl_dev;\n+\tconst struct rte_memzone *mz;\n+\tint rc = -ENOMEM;\n+\n+\tRTE_SET_USED(pci_drv);\n+\n+\trc = roc_plt_init();\n+\tif (rc) {\n+\t\tplt_err(\"Failed to initialize platform model, rc=%d\", rc);\n+\t\treturn rc;\n+\t}\n+\n+\tif (rte_eal_process_type() != RTE_PROC_PRIMARY)\n+\t\treturn 0;\n+\n+\tmz = rte_memzone_reserve_aligned(nix_inl_dev_to_name(pci_dev, name),\n+\t\t\t\t\t sizeof(*inl_dev), SOCKET_ID_ANY, 0,\n+\t\t\t\t\t RTE_CACHE_LINE_SIZE);\n+\tif (mz == NULL)\n+\t\treturn rc;\n+\n+\tinl_dev = mz->addr;\n+\tinl_dev->pci_dev = pci_dev;\n+\n+\t/* Parse devargs string */\n+\trc = 
nix_inl_parse_devargs(pci_dev->device.devargs, inl_dev);\n+\tif (rc) {\n+\t\tplt_err(\"Failed to parse devargs rc=%d\", rc);\n+\t\tgoto free_mem;\n+\t}\n+\n+\trc = roc_nix_inl_dev_init(inl_dev);\n+\tif (rc) {\n+\t\tplt_err(\"Failed to init nix inl device, rc=%d(%s)\", rc,\n+\t\t\troc_error_msg_get(rc));\n+\t\tgoto free_mem;\n+\t}\n+\n+\treturn 0;\n+free_mem:\n+\trte_memzone_free(mz);\n+\treturn rc;\n+}\n+\n+static const struct rte_pci_id cnxk_nix_inl_pci_map[] = {\n+\t{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_PF)},\n+\t{RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CNXK_RVU_NIX_INL_VF)},\n+\t{\n+\t\t.vendor_id = 0,\n+\t},\n+};\n+\n+static struct rte_pci_driver cnxk_nix_inl_pci = {\n+\t.id_table = cnxk_nix_inl_pci_map,\n+\t.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,\n+\t.probe = cnxk_nix_inl_dev_probe,\n+\t.remove = cnxk_nix_inl_dev_remove,\n+};\n+\n+RTE_PMD_REGISTER_PCI(cnxk_nix_inl, cnxk_nix_inl_pci);\n+RTE_PMD_REGISTER_PCI_TABLE(cnxk_nix_inl, cnxk_nix_inl_pci_map);\n+RTE_PMD_REGISTER_KMOD_DEP(cnxk_nix_inl, \"vfio-pci\");\n+\n+RTE_PMD_REGISTER_PARAM_STRING(cnxk_nix_inl,\n+\t\t\t      CNXK_NIX_INL_SELFTEST \"=1\"\n+\t\t\t      CNXK_NIX_INL_IPSEC_IN_MAX_SPI \"=<1-65535>\");\ndiff --git a/drivers/net/cnxk/cnxk_lookup.c b/drivers/net/cnxk/cnxk_lookup.c\nindex 0152ad9..f6ec768 100644\n--- a/drivers/net/cnxk/cnxk_lookup.c\n+++ b/drivers/net/cnxk/cnxk_lookup.c\n@@ -7,12 +7,8 @@\n \n #include \"cnxk_ethdev.h\"\n \n-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */\n-#define ERRCODE_ERRLEN_WIDTH 12\n-#define ERR_ARRAY_SZ\t     ((BIT(ERRCODE_ERRLEN_WIDTH)) * sizeof(uint32_t))\n-\n-#define SA_TBL_SZ\t(RTE_MAX_ETHPORTS * sizeof(uint64_t))\n-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_TBL_SZ)\n+#define SA_BASE_TBL_SZ\t(RTE_MAX_ETHPORTS * sizeof(uintptr_t))\n+#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ + SA_BASE_TBL_SZ)\n const uint32_t *\n cnxk_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)\n 
{\n@@ -324,3 +320,45 @@ cnxk_nix_fastpath_lookup_mem_get(void)\n \t}\n \treturn NULL;\n }\n+\n+int\n+cnxk_nix_lookup_mem_sa_base_set(struct cnxk_eth_dev *dev)\n+{\n+\tvoid *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();\n+\tuint16_t port = dev->eth_dev->data->port_id;\n+\tuintptr_t sa_base_tbl;\n+\tuintptr_t sa_base;\n+\tuint8_t sa_w;\n+\n+\tif (!lookup_mem)\n+\t\treturn -EIO;\n+\n+\tsa_base = roc_nix_inl_inb_sa_base_get(&dev->nix, dev->inb.inl_dev);\n+\tif (!sa_base)\n+\t\treturn -ENOTSUP;\n+\n+\tsa_w = plt_log2_u32(dev->nix.ipsec_in_max_spi + 1);\n+\n+\t/* Set SA Base in lookup mem */\n+\tsa_base_tbl = (uintptr_t)lookup_mem;\n+\tsa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;\n+\t*((uintptr_t *)sa_base_tbl + port) = sa_base | sa_w;\n+\treturn 0;\n+}\n+\n+int\n+cnxk_nix_lookup_mem_sa_base_clear(struct cnxk_eth_dev *dev)\n+{\n+\tvoid *lookup_mem = cnxk_nix_fastpath_lookup_mem_get();\n+\tuint16_t port = dev->eth_dev->data->port_id;\n+\tuintptr_t sa_base_tbl;\n+\n+\tif (!lookup_mem)\n+\t\treturn -EIO;\n+\n+\t/* Set SA Base in lookup mem */\n+\tsa_base_tbl = (uintptr_t)lookup_mem;\n+\tsa_base_tbl += PTYPE_ARRAY_SZ + ERR_ARRAY_SZ;\n+\t*((uintptr_t *)sa_base_tbl + port) = 0;\n+\treturn 0;\n+}\ndiff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build\nindex d4cdd17..6cc30c3 100644\n--- a/drivers/net/cnxk/meson.build\n+++ b/drivers/net/cnxk/meson.build\n@@ -12,6 +12,7 @@ sources = files(\n         'cnxk_ethdev.c',\n         'cnxk_ethdev_devargs.c',\n         'cnxk_ethdev_ops.c',\n+        'cnxk_ethdev_sec.c',\n         'cnxk_link.c',\n         'cnxk_lookup.c',\n         'cnxk_ptp.c',\n@@ -22,6 +23,7 @@ sources = files(\n # CN9K\n sources += files(\n         'cn9k_ethdev.c',\n+        'cn9k_ethdev_sec.c',\n         'cn9k_rte_flow.c',\n         'cn9k_rx.c',\n         'cn9k_rx_mseg.c',\ndiff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map\nindex c2e0723..b9da6b1 100644\n--- a/drivers/net/cnxk/version.map\n+++ 
b/drivers/net/cnxk/version.map\n@@ -1,3 +1,8 @@\n DPDK_22 {\n \tlocal: *;\n };\n+\n+INTERNAL {\n+\tglobal:\n+\tcnxk_nix_inb_mode_set;\n+};\n",
    "prefixes": [
        "15/27"
    ]
}