get:
Show a patch.
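
For example, a read-only fetch of this endpoint needs no credentials. A minimal sketch in Python using the requests library (client-side illustration, not part of the Patchwork page; the URL and the "name"/"state" fields are taken from the response shown below):

    import requests

    # Fetch patch 99319 as JSON; read access is unauthenticated
    resp = requests.get("https://patches.dpdk.org/api/patches/99319/")
    resp.raise_for_status()
    patch = resp.json()
    # Fields appear in the response below, e.g. name and state
    print(patch["name"], patch["state"])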

patch:
Partially update a patch.

put:
Update a patch.
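
Write access (PUT/PATCH) requires authentication. A sketch of a partial update, assuming you hold a Patchwork API token with maintainer rights on the project ("<api-token>" is a placeholder, and "state" is one of the maintainer-writable fields per the Patchwork REST API documentation):

    import requests

    # PATCH sends only the fields being changed
    resp = requests.patch(
        "https://patches.dpdk.org/api/patches/99319/",
        headers={"Authorization": "Token <api-token>"},  # placeholder token
        json={"state": "accepted"},  # assumes maintainer permission
    )
    resp.raise_for_status()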

GET /api/patches/99319/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 99319,
    "url": "https://patches.dpdk.org/api/patches/99319/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/20210920135202.1739660-5-radu.nicolau@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20210920135202.1739660-5-radu.nicolau@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20210920135202.1739660-5-radu.nicolau@intel.com",
    "date": "2021-09-20T13:52:00",
    "name": "[v3,4/6] net/iavf: add iAVF IPsec inline crypto support",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "15ee85bf65269c77b10f57abd7973be4b9a79091",
    "submitter": {
        "id": 743,
        "url": "https://patches.dpdk.org/api/people/743/?format=api",
        "name": "Radu Nicolau",
        "email": "radu.nicolau@intel.com"
    },
    "delegate": {
        "id": 1540,
        "url": "https://patches.dpdk.org/api/users/1540/?format=api",
        "username": "qzhan15",
        "first_name": "Qi",
        "last_name": "Zhang",
        "email": "qi.z.zhang@intel.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/20210920135202.1739660-5-radu.nicolau@intel.com/mbox/",
    "series": [
        {
            "id": 19046,
            "url": "https://patches.dpdk.org/api/series/19046/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=19046",
            "date": "2021-09-20T13:51:56",
            "name": "iavf: add iAVF IPsec inline crypto support",
            "version": 3,
            "mbox": "https://patches.dpdk.org/series/19046/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/99319/comments/",
    "check": "warning",
    "checks": "https://patches.dpdk.org/api/patches/99319/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 3E309A0548;\n\tMon, 20 Sep 2021 16:01:17 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 541EF41120;\n\tMon, 20 Sep 2021 16:00:51 +0200 (CEST)",
            "from mga06.intel.com (mga06.intel.com [134.134.136.31])\n by mails.dpdk.org (Postfix) with ESMTP id 2BCBF410F4\n for <dev@dpdk.org>; Mon, 20 Sep 2021 16:00:45 +0200 (CEST)",
            "from orsmga008.jf.intel.com ([10.7.209.65])\n by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 20 Sep 2021 07:00:45 -0700",
            "from silpixa00400884.ir.intel.com ([10.243.22.82])\n by orsmga008.jf.intel.com with ESMTP; 20 Sep 2021 07:00:42 -0700"
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10112\"; a=\"284148517\"",
            "E=Sophos;i=\"5.85,308,1624345200\"; d=\"scan'208\";a=\"284148517\"",
            "E=Sophos;i=\"5.85,308,1624345200\"; d=\"scan'208\";a=\"483824709\""
        ],
        "X-ExtLoop1": "1",
        "From": "Radu Nicolau <radu.nicolau@intel.com>",
        "To": "Jingjing Wu <jingjing.wu@intel.com>, Beilei Xing <beilei.xing@intel.com>,\n Ray Kinsella <mdr@ashroe.eu>",
        "Cc": "dev@dpdk.org, declan.doherty@intel.com, abhijit.sinha@intel.com,\n qi.z.zhang@intel.com, bruce.richardson@intel.com,\n konstantin.ananyev@intel.com, Radu Nicolau <radu.nicolau@intel.com>",
        "Date": "Mon, 20 Sep 2021 14:52:00 +0100",
        "Message-Id": "<20210920135202.1739660-5-radu.nicolau@intel.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "In-Reply-To": "<20210920135202.1739660-1-radu.nicolau@intel.com>",
        "References": "<20210909142428.750634-1-radu.nicolau@intel.com>\n <20210920135202.1739660-1-radu.nicolau@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH v3 4/6] net/iavf: add iAVF IPsec inline crypto\n support",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Add support for inline crypto for IPsec, for ESP transport and\ntunnel over IPv4 and IPv6, as well as supporting the offload for\nESP over UDP, and inconjunction with TSO for UDP and TCP flows.\nImplement support for rte_security packet metadata\n\nAdd definition for IPsec descriptors, extend support for offload\nin data and context descriptor to support\n\nAdd support to virtual channel mailbox for IPsec Crypto request\noperations. IPsec Crypto requests receive an initial acknowledgement\nfrom phsyical function driver of receipt of request and then an\nasynchronous response with success/failure of request including any\nresponse data.\n\nAdd enhanced descriptor debugging\n\nRefactor of scalar tx burst function to support integration of offload\n\nSigned-off-by: Declan Doherty <declan.doherty@intel.com>\nSigned-off-by: Abhijit Sinha <abhijit.sinha@intel.com>\nSigned-off-by: Radu Nicolau <radu.nicolau@intel.com>\n---\n drivers/net/iavf/iavf.h                       |   10 +\n drivers/net/iavf/iavf_ethdev.c                |   41 +-\n drivers/net/iavf/iavf_generic_flow.c          |   16 +\n drivers/net/iavf/iavf_generic_flow.h          |    2 +\n drivers/net/iavf/iavf_ipsec_crypto.c          | 1918 +++++++++++++++++\n drivers/net/iavf/iavf_ipsec_crypto.h          |   96 +\n .../net/iavf/iavf_ipsec_crypto_capabilities.h |  383 ++++\n drivers/net/iavf/iavf_rxtx.c                  |  203 +-\n drivers/net/iavf/iavf_rxtx.h                  |   94 +-\n drivers/net/iavf/iavf_vchnl.c                 |   29 +\n drivers/net/iavf/meson.build                  |    3 +-\n drivers/net/iavf/rte_pmd_iavf.h               |    1 +\n drivers/net/iavf/version.map                  |    3 +\n 13 files changed, 2777 insertions(+), 22 deletions(-)\n create mode 100644 drivers/net/iavf/iavf_ipsec_crypto.c\n create mode 100644 drivers/net/iavf/iavf_ipsec_crypto.h\n create mode 100644 drivers/net/iavf/iavf_ipsec_crypto_capabilities.h",
    "diff": "diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h\nindex 8c7f7c0bed..934ef48278 100644\n--- a/drivers/net/iavf/iavf.h\n+++ b/drivers/net/iavf/iavf.h\n@@ -217,6 +217,7 @@ struct iavf_info {\n \trte_spinlock_t flow_ops_lock;\n \tstruct iavf_parser_list rss_parser_list;\n \tstruct iavf_parser_list dist_parser_list;\n+\tstruct iavf_parser_list ipsec_crypto_parser_list;\n \n \tstruct iavf_fdir_info fdir; /* flow director info */\n \t/* indicate large VF support enabled or not */\n@@ -239,6 +240,7 @@ enum iavf_proto_xtr_type {\n \tIAVF_PROTO_XTR_IPV6_FLOW,\n \tIAVF_PROTO_XTR_TCP,\n \tIAVF_PROTO_XTR_IP_OFFSET,\n+\tIAVF_PROTO_XTR_IPSEC_CRYPTO_SAID,\n \tIAVF_PROTO_XTR_MAX,\n };\n \n@@ -250,11 +252,14 @@ struct iavf_devargs {\n \tuint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];\n };\n \n+struct iavf_security_ctx;\n+\n /* Structure to store private data for each VF instance. */\n struct iavf_adapter {\n \tstruct iavf_hw hw;\n \tstruct rte_eth_dev *eth_dev;\n \tstruct iavf_info vf;\n+\tstruct iavf_security_ctx *security_ctx;\n \n \tbool rx_bulk_alloc_allowed;\n \t/* For vector PMD */\n@@ -273,6 +278,8 @@ struct iavf_adapter {\n \t(&((struct iavf_adapter *)adapter)->vf)\n #define IAVF_DEV_PRIVATE_TO_HW(adapter) \\\n \t(&((struct iavf_adapter *)adapter)->hw)\n+#define IAVF_DEV_PRIVATE_TO_IAVF_SECURITY_CTX(adapter) \\\n+\t(((struct iavf_adapter *)adapter)->security_ctx)\n \n /* IAVF_VSI_TO */\n #define IAVF_VSI_TO_HW(vsi) \\\n@@ -415,5 +422,8 @@ int iavf_set_q_tc_map(struct rte_eth_dev *dev,\n \t\t\tuint16_t size);\n void iavf_tm_conf_init(struct rte_eth_dev *dev);\n void iavf_tm_conf_uninit(struct rte_eth_dev *dev);\n+int iavf_ipsec_crypto_request(struct iavf_adapter *adapter,\n+\t\tuint8_t *msg, size_t msg_len,\n+\t\tuint8_t *resp_msg, size_t resp_msg_len);\n extern const struct rte_tm_ops iavf_tm_ops;\n #endif /* _IAVF_ETHDEV_H_ */\ndiff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c\nindex c131461517..294be1a022 100644\n--- a/drivers/net/iavf/iavf_ethdev.c\n+++ b/drivers/net/iavf/iavf_ethdev.c\n@@ -29,6 +29,7 @@\n #include \"iavf_rxtx.h\"\n #include \"iavf_generic_flow.h\"\n #include \"rte_pmd_iavf.h\"\n+#include \"iavf_ipsec_crypto.h\"\n \n /* devargs */\n #define IAVF_PROTO_XTR_ARG         \"proto_xtr\"\n@@ -70,6 +71,11 @@ static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {\n \t[IAVF_PROTO_XTR_IP_OFFSET] = {\n \t\t.param = { .name = \"intel_pmd_dynflag_proto_xtr_ip_offset\" },\n \t\t.ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask },\n+\t[IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID] = {\n+\t\t.param = {\n+\t\t.name = \"intel_pmd_dynflag_proto_xtr_ipsec_crypto_said\" },\n+\t\t.ol_flag =\n+\t\t\t&rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask },\n };\n \n static int iavf_dev_configure(struct rte_eth_dev *dev);\n@@ -922,6 +928,9 @@ iavf_dev_stop(struct rte_eth_dev *dev)\n \tiavf_add_del_mc_addr_list(adapter, vf->mc_addrs, vf->mc_addrs_num,\n \t\t\t\t  false);\n \n+\t/* free iAVF security device context all related resources */\n+\tiavf_security_ctx_destroy(adapter);\n+\n \tadapter->stopped = 1;\n \tdev->data->dev_started = 0;\n \n@@ -931,7 +940,9 @@ iavf_dev_stop(struct rte_eth_dev *dev)\n static int\n iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)\n {\n-\tstruct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);\n+\tstruct iavf_adapter *adapter =\n+\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);\n+\tstruct iavf_info *vf = &adapter->vf;\n \n \tdev_info->max_rx_queues = IAVF_MAX_NUM_QUEUES_LV;\n 
\tdev_info->max_tx_queues = IAVF_MAX_NUM_QUEUES_LV;\n@@ -974,6 +985,11 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)\n \tif (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)\n \t\tdev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;\n \n+\tif (iavf_ipsec_crypto_supported(adapter)) {\n+\t\tdev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;\n+\t\tdev_info->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;\n+\t}\n+\n \tdev_info->default_rxconf = (struct rte_eth_rxconf) {\n \t\t.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,\n \t\t.rx_drop_en = 0,\n@@ -1730,6 +1746,7 @@ iavf_lookup_proto_xtr_type(const char *flex_name)\n \t\t{ \"ipv6_flow\", IAVF_PROTO_XTR_IPV6_FLOW },\n \t\t{ \"tcp\",       IAVF_PROTO_XTR_TCP       },\n \t\t{ \"ip_offset\", IAVF_PROTO_XTR_IP_OFFSET },\n+\t\t{ \"ipsec_crypto_said\", IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID },\n \t};\n \tuint32_t i;\n \n@@ -1738,8 +1755,8 @@ iavf_lookup_proto_xtr_type(const char *flex_name)\n \t\t\treturn xtr_type_map[i].type;\n \t}\n \n-\tPMD_DRV_LOG(ERR, \"wrong proto_xtr type, \"\n-\t\t    \"it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset\");\n+\tPMD_DRV_LOG(ERR, \"wrong proto_xtr type, it should be: \"\n+\t\t\t\"vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset|ipsec_crypto_said\");\n \n \treturn -1;\n }\n@@ -2357,6 +2374,24 @@ iavf_dev_init(struct rte_eth_dev *eth_dev)\n \t\tgoto flow_init_err;\n \t}\n \n+\t/** Check if the IPsec Crypto offload is supported and create\n+\t *  security_ctx if it is.\n+\t */\n+\tif (iavf_ipsec_crypto_supported(adapter)) {\n+\t\t/* Initialize security_ctx only for primary process*/\n+\t\tret = iavf_security_ctx_create(adapter);\n+\t\tif (ret) {\n+\t\t\tPMD_INIT_LOG(ERR, \"failed to create ipsec crypto security instance\");\n+\t\t\treturn ret;\n+\t\t}\n+\n+\t\tret = iavf_security_init(adapter);\n+\t\tif (ret) {\n+\t\t\tPMD_INIT_LOG(ERR, \"failed to initialized ipsec crypto resources\");\n+\t\t\treturn ret;\n+\t\t}\n+\t}\n+\n \tiavf_default_rss_disable(adapter);\n \n \treturn 0;\ndiff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c\nindex 1fe270fb22..d85e82a950 100644\n--- a/drivers/net/iavf/iavf_generic_flow.c\n+++ b/drivers/net/iavf/iavf_generic_flow.c\n@@ -1635,6 +1635,7 @@ iavf_flow_init(struct iavf_adapter *ad)\n \tTAILQ_INIT(&vf->flow_list);\n \tTAILQ_INIT(&vf->rss_parser_list);\n \tTAILQ_INIT(&vf->dist_parser_list);\n+\tTAILQ_INIT(&vf->ipsec_crypto_parser_list);\n \trte_spinlock_init(&vf->flow_ops_lock);\n \n \tTAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {\n@@ -1709,6 +1710,9 @@ iavf_register_parser(struct iavf_flow_parser *parser,\n \t} else if (parser->engine->type == IAVF_FLOW_ENGINE_FDIR) {\n \t\tlist = &vf->dist_parser_list;\n \t\tTAILQ_INSERT_HEAD(list, parser_node, node);\n+\t} else if (parser->engine->type == IAVF_FLOW_ENGINE_IPSEC_CRYPTO) {\n+\t\tlist = &vf->ipsec_crypto_parser_list;\n+\t\tTAILQ_INSERT_HEAD(list, parser_node, node);\n \t} else {\n \t\treturn -EINVAL;\n \t}\n@@ -2018,6 +2022,14 @@ iavf_flow_process_filter(struct rte_eth_dev *dev,\n \n \t*engine = iavf_parse_engine(ad, flow, &vf->dist_parser_list, pattern,\n \t\t\t\t    actions, error);\n+\tif (*engine)\n+\t\treturn 0;\n+\n+\t*engine = iavf_parse_engine(ad, flow, &vf->ipsec_crypto_parser_list,\n+\t\t\tpattern, actions, error);\n+\tif (*engine)\n+\t\treturn 0;\n+\n \n \tif (!*engine) {\n \t\trte_flow_error_set(error, EINVAL,\n@@ -2064,6 +2076,10 @@ iavf_flow_create(struct rte_eth_dev *dev,\n \t\treturn flow;\n \t}\n \n+\t/* Special case for inline crypto egress flows 
*/\n+\tif (attr->egress && actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY)\n+\t\tgoto free_flow;\n+\n \tret = iavf_flow_process_filter(dev, flow, attr, pattern, actions,\n \t\t\t&engine, iavf_parse_engine_create, error);\n \tif (ret < 0) {\ndiff --git a/drivers/net/iavf/iavf_generic_flow.h b/drivers/net/iavf/iavf_generic_flow.h\nindex 4794d1fb80..a471c0331f 100644\n--- a/drivers/net/iavf/iavf_generic_flow.h\n+++ b/drivers/net/iavf/iavf_generic_flow.h\n@@ -449,6 +449,7 @@ typedef int (*parse_pattern_action_t)(struct iavf_adapter *ad,\n /* engine types. */\n enum iavf_flow_engine_type {\n \tIAVF_FLOW_ENGINE_NONE = 0,\n+\tIAVF_FLOW_ENGINE_IPSEC_CRYPTO,\n \tIAVF_FLOW_ENGINE_FDIR,\n \tIAVF_FLOW_ENGINE_HASH,\n \tIAVF_FLOW_ENGINE_MAX,\n@@ -462,6 +463,7 @@ enum iavf_flow_engine_type {\n  */\n enum iavf_flow_classification_stage {\n \tIAVF_FLOW_STAGE_NONE = 0,\n+\tIAVF_FLOW_STAGE_IPSEC_CRYPTO,\n \tIAVF_FLOW_STAGE_RSS,\n \tIAVF_FLOW_STAGE_DISTRIBUTOR,\n \tIAVF_FLOW_STAGE_MAX,\ndiff --git a/drivers/net/iavf/iavf_ipsec_crypto.c b/drivers/net/iavf/iavf_ipsec_crypto.c\nnew file mode 100644\nindex 0000000000..604766b640\n--- /dev/null\n+++ b/drivers/net/iavf/iavf_ipsec_crypto.c\n@@ -0,0 +1,1918 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(c) 2020 Intel Corporation\n+ */\n+\n+#include <rte_cryptodev.h>\n+#include <rte_ethdev.h>\n+#include <rte_security_driver.h>\n+#include <rte_security.h>\n+\n+#include \"iavf.h\"\n+#include \"iavf_rxtx.h\"\n+#include \"iavf_log.h\"\n+#include \"iavf_generic_flow.h\"\n+\n+#include \"iavf_ipsec_crypto.h\"\n+#include \"iavf_ipsec_crypto_capabilities.h\"\n+\n+/**\n+ * iAVF IPsec Crypto Security Context\n+ */\n+struct iavf_security_ctx {\n+\tstruct iavf_adapter *adapter;\n+\tint pkt_md_offset;\n+\tstruct rte_cryptodev_capabilities *crypto_capabilities;\n+};\n+\n+/**\n+ * iAVF IPsec Crypto Security Session Parameters\n+ */\n+struct iavf_security_session {\n+\tstruct iavf_adapter *adapter;\n+\n+\tenum rte_security_ipsec_sa_mode mode;\n+\tenum rte_security_ipsec_tunnel_type type;\n+\tenum rte_security_ipsec_sa_direction direction;\n+\n+\tstruct {\n+\t\tuint32_t spi; /* Security Parameter Index */\n+\t\tuint32_t hw_idx; /* SA Index in hardware table */\n+\t} sa;\n+\n+\tstruct {\n+\t\tuint8_t enabled :1;\n+\t\tunion {\n+\t\t\tuint64_t value;\n+\t\t\tstruct {\n+\t\t\t\tuint32_t hi;\n+\t\t\t\tuint32_t low;\n+\t\t\t};\n+\t\t};\n+\t} esn;\n+\n+\tstruct {\n+\t\tuint8_t enabled :1;\n+\t\tuint16_t mss;\n+\t} tso;\n+\n+\tstruct {\n+\t\tuint8_t enabled :1;\n+\t} udp_encap;\n+\n+\tsize_t iv_sz;\n+\tsize_t icv_sz;\n+\tsize_t block_sz;\n+\n+\tstruct iavf_ipsec_crypto_pkt_metadata pkt_metadata_template;\n+};\n+/**\n+ *  IV Length field in IPsec Tx Desc uses the following encoding:\n+ *\n+ *  0B - 0\n+ *  4B - 1\n+ *  8B - 2\n+ *  16B - 3\n+ *\n+ * but we also need the IV Length for TSO to correctly calculate the total\n+ * header length so placing it in the upper 6-bits here for easier reterival.\n+ */\n+static inline uint8_t\n+calc_ipsec_desc_iv_len_field(uint16_t iv_sz)\n+{\n+\tuint8_t iv_length = IAVF_IPSEC_IV_LEN_NONE;\n+\n+\tswitch (iv_sz) {\n+\tcase 4:\n+\t\tiv_length = IAVF_IPSEC_IV_LEN_DW;\n+\t\tbreak;\n+\tcase 8:\n+\t\tiv_length = IAVF_IPSEC_IV_LEN_DDW;\n+\t\tbreak;\n+\tcase 16:\n+\t\tiv_length = IAVF_IPSEC_IV_LEN_QDW;\n+\t\tbreak;\n+\t}\n+\n+\treturn (iv_sz << 2) | iv_length;\n+}\n+\n+\n+static unsigned int\n+iavf_ipsec_crypto_session_size_get(void *device __rte_unused)\n+{\n+\treturn sizeof(struct iavf_security_session);\n+}\n+\n+static const struct 
rte_cryptodev_symmetric_capability *\n+get_capability(struct iavf_security_ctx *iavf_sctx,\n+\tuint32_t algo, uint32_t type)\n+{\n+\tconst struct rte_cryptodev_capabilities *capability;\n+\tint i = 0;\n+\n+\tcapability = &iavf_sctx->crypto_capabilities[i];\n+\n+\twhile (capability->op != RTE_CRYPTO_OP_TYPE_UNDEFINED) {\n+\t\tif (capability->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&\n+\t\t\tcapability->sym.xform_type == type &&\n+\t\t\tcapability->sym.cipher.algo == algo)\n+\t\t\treturn &capability->sym;\n+\t\t/** try next capability */\n+\t\tcapability = &iavf_crypto_capabilities[i++];\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static const struct rte_cryptodev_symmetric_capability *\n+get_auth_capability(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_auth_algorithm algo)\n+{\n+\treturn get_capability(iavf_sctx, algo, RTE_CRYPTO_SYM_XFORM_AUTH);\n+}\n+\n+static const struct rte_cryptodev_symmetric_capability *\n+get_cipher_capability(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_cipher_algorithm algo)\n+{\n+\treturn get_capability(iavf_sctx, algo, RTE_CRYPTO_SYM_XFORM_CIPHER);\n+}\n+static const struct rte_cryptodev_symmetric_capability *\n+get_aead_capability(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_aead_algorithm algo)\n+{\n+\treturn get_capability(iavf_sctx, algo, RTE_CRYPTO_SYM_XFORM_AEAD);\n+}\n+\n+static uint16_t\n+get_cipher_blocksize(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_cipher_algorithm algo)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_cipher_capability(iavf_sctx, algo);\n+\tif (capability == NULL)\n+\t\treturn 0;\n+\n+\treturn capability->cipher.block_size;\n+}\n+\n+static uint16_t\n+get_aead_blocksize(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_aead_algorithm algo)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_aead_capability(iavf_sctx, algo);\n+\tif (capability == NULL)\n+\t\treturn 0;\n+\n+\treturn capability->cipher.block_size;\n+}\n+\n+static uint16_t\n+get_auth_blocksize(struct iavf_security_ctx *iavf_sctx,\n+\tenum rte_crypto_auth_algorithm algo)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_auth_capability(iavf_sctx, algo);\n+\tif (capability == NULL)\n+\t\treturn 0;\n+\n+\treturn capability->auth.block_size;\n+}\n+\n+static uint8_t\n+calc_context_desc_cipherblock_sz(size_t len)\n+{\n+\tswitch (len) {\n+\tcase 8:\n+\t\treturn 0x2;\n+\tcase 16:\n+\t\treturn 0x3;\n+\tdefault:\n+\t\treturn 0x0;\n+\t}\n+}\n+\n+static int\n+valid_length(uint32_t len, uint32_t min, uint32_t max, uint32_t increment)\n+{\n+\tif (len < min || len > max)\n+\t\treturn false;\n+\n+\tif (increment == 0)\n+\t\treturn true;\n+\n+\tif ((len - min) % increment)\n+\t\treturn false;\n+\n+\t/* make sure it fits in the key array */\n+\tif (len > VIRTCHNL_IPSEC_MAX_KEY_LEN)\n+\t\treturn false;\n+\n+\treturn true;\n+}\n+\n+static int\n+valid_auth_xform(struct iavf_security_ctx *iavf_sctx,\n+\tstruct rte_crypto_auth_xform *auth)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_auth_capability(iavf_sctx, auth->algo);\n+\tif (capability == NULL)\n+\t\treturn false;\n+\n+\t/* verify key size */\n+\tif (!valid_length(auth->key.length,\n+\t\tcapability->auth.key_size.min,\n+\t\tcapability->auth.key_size.max,\n+\t\tcapability->aead.key_size.increment))\n+\t\treturn false;\n+\n+\treturn true;\n+}\n+\n+static int\n+valid_cipher_xform(struct iavf_security_ctx *iavf_sctx,\n+\tstruct 
rte_crypto_cipher_xform *cipher)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_cipher_capability(iavf_sctx, cipher->algo);\n+\tif (capability == NULL)\n+\t\treturn false;\n+\n+\t/* verify key size */\n+\tif (!valid_length(cipher->key.length,\n+\t\tcapability->cipher.key_size.min,\n+\t\tcapability->cipher.key_size.max,\n+\t\tcapability->cipher.key_size.increment))\n+\t\treturn false;\n+\n+\treturn true;\n+}\n+\n+static int\n+valid_aead_xform(struct iavf_security_ctx *iavf_sctx,\n+\tstruct rte_crypto_aead_xform *aead)\n+{\n+\tconst struct rte_cryptodev_symmetric_capability *capability;\n+\n+\tcapability = get_aead_capability(iavf_sctx, aead->algo);\n+\tif (capability == NULL)\n+\t\treturn false;\n+\n+\t/* verify key size */\n+\tif (!valid_length(aead->key.length,\n+\t\tcapability->aead.key_size.min,\n+\t\tcapability->aead.key_size.max,\n+\t\tcapability->aead.key_size.increment))\n+\t\treturn false;\n+\n+\treturn true;\n+}\n+\n+static int\n+iavf_ipsec_crypto_session_validate_conf(struct iavf_security_ctx *iavf_sctx,\n+\tstruct rte_security_session_conf *conf)\n+{\n+\t/** validate security action/protocol selection */\n+\tif (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO ||\n+\t\tconf->protocol != RTE_SECURITY_PROTOCOL_IPSEC) {\n+\t\tPMD_DRV_LOG(ERR, \"Invalid action / protocol specified\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/** validate IPsec protocol selection */\n+\tif (conf->ipsec.proto != RTE_SECURITY_IPSEC_SA_PROTO_ESP) {\n+\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec protocol specified\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/** validate selected options */\n+\tif (conf->ipsec.options.copy_dscp ||\n+\t\tconf->ipsec.options.copy_flabel ||\n+\t\tconf->ipsec.options.copy_df ||\n+\t\tconf->ipsec.options.dec_ttl ||\n+\t\tconf->ipsec.options.ecn ||\n+\t\tconf->ipsec.options.stats) {\n+\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/**\n+\t * Validate crypto xforms parameters.\n+\t *\n+\t * AEAD transforms can be used for either inbound/outbound IPsec SAs,\n+\t * for non-AEAD crypto transforms we explicitly only support CIPHER/AUTH\n+\t * for outbound and AUTH/CIPHER chained transforms for inbound IPsec.\n+\t */\n+\tif (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {\n+\t\tif (!valid_aead_xform(iavf_sctx, &conf->crypto_xform->aead)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t} else if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&\n+\t\tconf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&\n+\t\tconf->crypto_xform->next &&\n+\t\tconf->crypto_xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) {\n+\t\tif (!valid_cipher_xform(iavf_sctx,\n+\t\t\t\t&conf->crypto_xform->cipher)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\n+\t\tif (!valid_auth_xform(iavf_sctx,\n+\t\t\t\t&conf->crypto_xform->next->auth)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t} else if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&\n+\t\tconf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&\n+\t\tconf->crypto_xform->next &&\n+\t\tconf->crypto_xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {\n+\t\tif (!valid_auth_xform(iavf_sctx, &conf->crypto_xform->auth)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\n+\t\tif 
(!valid_cipher_xform(iavf_sctx,\n+\t\t\t\t&conf->crypto_xform->next->cipher)) {\n+\t\t\tPMD_DRV_LOG(ERR, \"Invalid IPsec option specified\");\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static void\n+sa_add_set_aead_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,\n+\tstruct rte_crypto_aead_xform *aead, uint32_t salt)\n+{\n+\tcfg->crypto_type = VIRTCHNL_AEAD;\n+\n+\tswitch (aead->algo) {\n+\tcase RTE_CRYPTO_AEAD_AES_CCM:\n+\t\tcfg->algo_type = VIRTCHNL_AES_CCM; break;\n+\tcase RTE_CRYPTO_AEAD_AES_GCM:\n+\t\tcfg->algo_type = VIRTCHNL_AES_GCM; break;\n+\tcase RTE_CRYPTO_AEAD_CHACHA20_POLY1305:\n+\t\tcfg->algo_type = VIRTCHNL_CHACHA20_POLY1305; break;\n+\tdefault:\n+\t\tPMD_DRV_LOG(ERR, \"Invalid AEAD parameters\");\n+\t\tbreak;\n+\t}\n+\n+\tcfg->key_len = aead->key.length;\n+\tcfg->iv_len = aead->iv.length;\n+\tcfg->digest_len = aead->digest_length;\n+\tcfg->salt = salt;\n+\n+\tmemcpy(cfg->key_data, aead->key.data, cfg->key_len);\n+}\n+\n+static void\n+sa_add_set_cipher_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,\n+\tstruct rte_crypto_cipher_xform *cipher, uint32_t salt)\n+{\n+\tcfg->crypto_type = VIRTCHNL_CIPHER;\n+\n+\tswitch (cipher->algo) {\n+\tcase RTE_CRYPTO_CIPHER_AES_CBC:\n+\t\tcfg->algo_type = VIRTCHNL_AES_CBC; break;\n+\tcase RTE_CRYPTO_CIPHER_3DES_CBC:\n+\t\tcfg->algo_type = VIRTCHNL_3DES_CBC; break;\n+\tcase RTE_CRYPTO_CIPHER_NULL:\n+\t\tcfg->algo_type = VIRTCHNL_CIPHER_NO_ALG; break;\n+\tcase RTE_CRYPTO_CIPHER_AES_CTR:\n+\t\tcfg->algo_type = VIRTCHNL_AES_CTR;\n+\t\tcfg->salt = salt;\n+\t\tbreak;\n+\tdefault:\n+\t\tPMD_DRV_LOG(ERR, \"Invalid cipher parameters\");\n+\t\tbreak;\n+\t}\n+\n+\tcfg->key_len = cipher->key.length;\n+\tcfg->iv_len = cipher->iv.length;\n+\tcfg->salt = salt;\n+\n+\tmemcpy(cfg->key_data, cipher->key.data, cfg->key_len);\n+}\n+\n+\n+static void\n+sa_add_set_auth_params(struct virtchnl_ipsec_crypto_cfg_item *cfg,\n+\tstruct rte_crypto_auth_xform *auth, uint32_t salt)\n+{\n+\tcfg->crypto_type = VIRTCHNL_AUTH;\n+\n+\tswitch (auth->algo) {\n+\tcase RTE_CRYPTO_AUTH_NULL:\n+\t\tcfg->algo_type = VIRTCHNL_HASH_NO_ALG; break;\n+\tcase RTE_CRYPTO_AUTH_AES_CBC_MAC:\n+\t\tcfg->algo_type = VIRTCHNL_AES_CBC_MAC; break;\n+\tcase RTE_CRYPTO_AUTH_AES_CMAC:\n+\t\tcfg->algo_type = VIRTCHNL_AES_CMAC; break;\n+\tcase RTE_CRYPTO_AUTH_AES_XCBC_MAC:\n+\t\tcfg->algo_type = VIRTCHNL_AES_XCBC_MAC; break;\n+\tcase RTE_CRYPTO_AUTH_MD5_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_MD5_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_SHA1_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_SHA1_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_SHA224_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_SHA224_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_SHA256_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_SHA256_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_SHA384_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_SHA384_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_SHA512_HMAC:\n+\t\tcfg->algo_type = VIRTCHNL_SHA512_HMAC; break;\n+\tcase RTE_CRYPTO_AUTH_AES_GMAC:\n+\t\tcfg->algo_type = VIRTCHNL_AES_GMAC;\n+\t\tcfg->salt = salt;\n+\t\tbreak;\n+\tdefault:\n+\t\tPMD_DRV_LOG(ERR, \"Invalid auth parameters\");\n+\t\tbreak;\n+\t}\n+\n+\tcfg->key_len = auth->key.length;\n+\tcfg->iv_len = auth->iv.length;\n+\tcfg->digest_len = auth->digest_length;\n+\n+\tmemcpy(cfg->key_data, auth->key.data, cfg->key_len);\n+}\n+\n+/**\n+ * Send SA add virtual channel request to Inline IPsec driver.\n+ *\n+ * Inline IPsec driver expects SPI and destination IP adderss to be in host\n+ * order, but DPDK APIs are network order, therefore we need to do a htonl\n+ * conversion of these 
parameters.\n+ */\n+static uint32_t\n+iavf_ipsec_crypto_security_association_add(struct iavf_adapter *adapter,\n+\tstruct rte_security_session_conf *conf)\n+{\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tstruct virtchnl_ipsec_sa_cfg *sa_cfg;\n+\tsize_t request_len, response_len;\n+\n+\tint rc;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sa_cfg);\n+\n+\trequest = rte_malloc(\"iavf-sad-add-request\", request_len, 0);\n+\tif (request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sa_cfg_resp);\n+\tresponse = rte_malloc(\"iavf-sad-add-response\", response_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_SA_CREATE;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/* set SA configuration params */\n+\tsa_cfg = (struct virtchnl_ipsec_sa_cfg *)(request + 1);\n+\n+\tsa_cfg->spi = htonl(conf->ipsec.spi);\n+\tsa_cfg->virtchnl_protocol_type = VIRTCHNL_PROTO_ESP;\n+\tsa_cfg->virtchnl_direction =\n+\t\tconf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS ?\n+\t\t\tVIRTCHNL_DIR_INGRESS : VIRTCHNL_DIR_EGRESS;\n+\n+\tif (conf->ipsec.options.esn) {\n+\t\tsa_cfg->esn_enabled = 1;\n+\t\tsa_cfg->esn_hi = conf->ipsec.esn.hi;\n+\t\tsa_cfg->esn_low = conf->ipsec.esn.low;\n+\t}\n+\n+\tif (conf->ipsec.options.udp_encap)\n+\t\tsa_cfg->udp_encap_enabled = 1;\n+\n+\t/* Set outer IP params */\n+\tif (conf->ipsec.tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {\n+\t\tsa_cfg->virtchnl_ip_type = VIRTCHNL_IPV4;\n+\n+\t\t*((uint32_t *)sa_cfg->dst_addr)\t=\n+\t\t\thtonl(conf->ipsec.tunnel.ipv4.dst_ip.s_addr);\n+\t} else {\n+\t\tuint32_t *v6_dst_addr =\n+\t\t\tconf->ipsec.tunnel.ipv6.dst_addr.s6_addr32;\n+\n+\t\tsa_cfg->virtchnl_ip_type = VIRTCHNL_IPV6;\n+\n+\t\t((uint32_t *)sa_cfg->dst_addr)[0] = htonl(v6_dst_addr[0]);\n+\t\t((uint32_t *)sa_cfg->dst_addr)[1] = htonl(v6_dst_addr[1]);\n+\t\t((uint32_t *)sa_cfg->dst_addr)[2] = htonl(v6_dst_addr[2]);\n+\t\t((uint32_t *)sa_cfg->dst_addr)[3] = htonl(v6_dst_addr[3]);\n+\t}\n+\n+\t/* set crypto params */\n+\tif (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {\n+\t\tsa_add_set_aead_params(&sa_cfg->crypto_cfg.items[0],\n+\t\t\t&conf->crypto_xform->aead, conf->ipsec.salt);\n+\n+\t} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {\n+\t\tsa_add_set_cipher_params(&sa_cfg->crypto_cfg.items[0],\n+\t\t\t&conf->crypto_xform->cipher, conf->ipsec.salt);\n+\t\tsa_add_set_auth_params(&sa_cfg->crypto_cfg.items[1],\n+\t\t\t&conf->crypto_xform->next->auth, conf->ipsec.salt);\n+\n+\t} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {\n+\t\tsa_add_set_auth_params(&sa_cfg->crypto_cfg.items[0],\n+\t\t\t&conf->crypto_xform->auth, conf->ipsec.salt);\n+\t\tif (conf->crypto_xform->auth.algo != RTE_CRYPTO_AUTH_AES_GMAC)\n+\t\t\tsa_add_set_cipher_params(&sa_cfg->crypto_cfg.items[1],\n+\t\t\t&conf->crypto_xform->next->cipher, conf->ipsec.salt);\n+\t}\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response id */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id)\n+\t\trc = -EFAULT;\n+\telse\n+\t\trc = 
response->ipsec_data.sa_cfg_resp->sa_handle;\n+update_cleanup:\n+\trte_free(response);\n+\trte_free(request);\n+\n+\treturn rc;\n+}\n+\n+static void\n+set_pkt_metadata_template(struct iavf_ipsec_crypto_pkt_metadata *template,\n+\tstruct iavf_security_session *sess)\n+{\n+\ttemplate->sa_idx = sess->sa.hw_idx;\n+\n+\tif (sess->udp_encap.enabled)\n+\t\ttemplate->ol_flags = IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT;\n+\n+\tif (sess->esn.enabled)\n+\t\ttemplate->ol_flags = IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN;\n+\n+\ttemplate->len_iv = calc_ipsec_desc_iv_len_field(sess->iv_sz);\n+\ttemplate->ctx_desc_ipsec_params =\n+\t\t\tcalc_context_desc_cipherblock_sz(sess->block_sz) |\n+\t\t\t((uint8_t)(sess->icv_sz >> 2) << 3);\n+}\n+\n+static void\n+set_session_parameter(struct iavf_security_ctx *iavf_sctx,\n+\tstruct iavf_security_session *sess,\n+\tstruct rte_security_session_conf *conf, uint32_t sa_idx)\n+{\n+\tsess->adapter = iavf_sctx->adapter;\n+\n+\tsess->mode = conf->ipsec.mode;\n+\tsess->direction = conf->ipsec.direction;\n+\n+\tif (sess->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)\n+\t\tsess->type = conf->ipsec.tunnel.type;\n+\n+\tsess->sa.spi = conf->ipsec.spi;\n+\tsess->sa.hw_idx = sa_idx;\n+\n+\tif (conf->ipsec.options.esn) {\n+\t\tsess->esn.enabled = 1;\n+\t\tsess->esn.value = conf->ipsec.esn.value;\n+\t}\n+\n+\tif (conf->ipsec.options.tso) {\n+\t\tsess->tso.enabled = 1;\n+\t\tsess->tso.mss = conf->ipsec.mss;\n+\t}\n+\n+\tif (conf->ipsec.options.udp_encap)\n+\t\tsess->udp_encap.enabled = 1;\n+\n+\tif (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {\n+\t\tsess->block_sz = get_aead_blocksize(iavf_sctx,\n+\t\t\tconf->crypto_xform->aead.algo);\n+\t\tsess->iv_sz = conf->crypto_xform->aead.iv.length;\n+\t\tsess->icv_sz = conf->crypto_xform->aead.digest_length;\n+\t} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {\n+\t\tsess->block_sz = get_cipher_blocksize(iavf_sctx,\n+\t\t\tconf->crypto_xform->cipher.algo);\n+\t\tsess->iv_sz = conf->crypto_xform->cipher.iv.length;\n+\t\tsess->icv_sz = conf->crypto_xform->next->auth.digest_length;\n+\t} else if (conf->crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {\n+\t\tif (conf->crypto_xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC) {\n+\t\t\tsess->block_sz = get_auth_blocksize(iavf_sctx,\n+\t\t\t\tRTE_CRYPTO_SYM_XFORM_AUTH);\n+\t\t\tsess->iv_sz = conf->crypto_xform->auth.iv.length;\n+\t\t\tsess->icv_sz = conf->crypto_xform->auth.digest_length;\n+\t\t} else {\n+\t\t\tsess->block_sz = get_cipher_blocksize(iavf_sctx,\n+\t\t\t\tconf->crypto_xform->next->cipher.algo);\n+\t\t\tsess->iv_sz =\n+\t\t\t\tconf->crypto_xform->next->cipher.iv.length;\n+\t\t\tsess->icv_sz = conf->crypto_xform->auth.digest_length;\n+\t\t}\n+\t}\n+\n+\tset_pkt_metadata_template(&sess->pkt_metadata_template, sess);\n+}\n+\n+/**\n+ * Create IPsec Security Association for inline IPsec Crypto offload.\n+ *\n+ * 1. validate session configuration parameters\n+ * 2. allocate session memory from mempool\n+ * 3. add SA to hardware database\n+ * 4. set session parameters\n+ * 5. 
create packet metadata template for datapath\n+ */\n+static int\n+iavf_ipsec_crypto_session_create(void *device,\n+\t\t\t\t struct rte_security_session_conf *conf,\n+\t\t\t\t struct rte_security_session *session,\n+\t\t\t\t struct rte_mempool *mempool)\n+{\n+\tstruct rte_eth_dev *ethdev = device;\n+\tstruct iavf_adapter *adapter =\n+\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private);\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\tstruct iavf_security_session *iavf_session = NULL;\n+\tint sa_idx;\n+\tint ret = 0;\n+\n+\t/* validate that all SA parameters are valid for device */\n+\tret = iavf_ipsec_crypto_session_validate_conf(iavf_sctx, conf);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* allocate session context */\n+\tif (rte_mempool_get(mempool, (void **)&iavf_session)) {\n+\t\tPMD_DRV_LOG(ERR, \"Cannot get object from sess mempool\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\t/* add SA to hardware database */\n+\tsa_idx = iavf_ipsec_crypto_security_association_add(adapter, conf);\n+\tif (sa_idx < 0) {\n+\t\tPMD_DRV_LOG(ERR,\n+\t\t\t\"Failed to add SA (spi: %d, mode: %s, direction: %s)\",\n+\t\t\tconf->ipsec.spi,\n+\t\t\tconf->ipsec.mode ==\n+\t\t\t\tRTE_SECURITY_IPSEC_SA_MODE_TRANSPORT ?\n+\t\t\t\t\"transport\" : \"tunnel\",\n+\t\t\tconf->ipsec.direction ==\n+\t\t\t\tRTE_SECURITY_IPSEC_SA_DIR_INGRESS ?\n+\t\t\t\t\"inbound\" : \"outbound\");\n+\n+\t\trte_mempool_put(mempool, iavf_session);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/* save data plane required session parameters */\n+\tset_session_parameter(iavf_sctx, iavf_session, conf, sa_idx);\n+\n+\t/* save to security session private data */\n+\tset_sec_session_private_data(session, iavf_session);\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * Check if valid ipsec crypto action.\n+ * SPI must be non-zero and SPI in session must match SPI value\n+ * passed into function.\n+ *\n+ * returns: 0 if invalid session or SPI value equal zero\n+ * returns: 1 if valid\n+ */\n+uint32_t\n+iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev,\n+\tconst struct rte_security_session *session, uint32_t spi)\n+{\n+\tstruct iavf_adapter *adapter =\n+\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private);\n+\tstruct iavf_security_session *sess = session->sess_private_data;\n+\n+\t/* verify we have a valid session and that it belong to this adapter */\n+\tif (unlikely(sess == NULL || sess->adapter != adapter))\n+\t\treturn false;\n+\n+\t/* SPI value must be non-zero */\n+\tif (spi == 0)\n+\t\treturn false;\n+\t/* Session SPI must patch flow SPI*/\n+\telse if (sess->sa.spi == spi) {\n+\t\treturn true;\n+\t\t/**\n+\t\t * TODO: We should add a way of tracking valid hw SA indices to\n+\t\t * make validation less brittle\n+\t\t */\n+\t}\n+\n+\t\treturn true;\n+}\n+\n+\n+/**\n+ * Send virtual channel security policy add request to IES driver.\n+ *\n+ * IES driver expects SPI and destination IP adderss to be in host\n+ * order, but DPDK APIs are network order, therefore we need to do a htonl\n+ * conversion of these parameters.\n+ */\n+int\n+iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,\n+\tuint32_t esp_spi,\n+\tuint8_t is_v4,\n+\trte_be32_t v4_dst_addr,\n+\tuint8_t *v6_dst_addr,\n+\tuint8_t drop)\n+{\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tsize_t request_len, response_len;\n+\tint rc = 0;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sp_cfg);\n+\trequest = rte_malloc(\"iavf-inbound-security-policy-add-request\",\n+\t\t\t\trequest_len, 0);\n+\tif 
(request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_SP_CREATE;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/* ESP SPI */\n+\trequest->ipsec_data.sp_cfg->spi = htonl(esp_spi);\n+\n+\t/* Destination IP  */\n+\tif (is_v4) {\n+\t\trequest->ipsec_data.sp_cfg->table_id =\n+\t\t\t\tVIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV4;\n+\t\trequest->ipsec_data.sp_cfg->dip[0] = htonl(v4_dst_addr);\n+\t} else {\n+\t\trequest->ipsec_data.sp_cfg->table_id =\n+\t\t\t\tVIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV6;\n+\t\trequest->ipsec_data.sp_cfg->dip[0] =\n+\t\t\t\thtonl(((uint32_t *)v6_dst_addr)[0]);\n+\t\trequest->ipsec_data.sp_cfg->dip[1] =\n+\t\t\t\thtonl(((uint32_t *)v6_dst_addr)[1]);\n+\t\trequest->ipsec_data.sp_cfg->dip[2] =\n+\t\t\t\thtonl(((uint32_t *)v6_dst_addr)[2]);\n+\t\trequest->ipsec_data.sp_cfg->dip[3] =\n+\t\t\t\thtonl(((uint32_t *)v6_dst_addr)[3]);\n+\t}\n+\n+\trequest->ipsec_data.sp_cfg->drop = drop;\n+\n+\t/** Traffic Class/Congestion Domain currently not support */\n+\trequest->ipsec_data.sp_cfg->set_tc = 0;\n+\trequest->ipsec_data.sp_cfg->cgd = 0;\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sp_cfg_resp);\n+\tresponse = rte_malloc(\"iavf-inbound-security-policy-add-response\",\n+\t\t\t\tresponse_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id)\n+\t\trc = -EFAULT;\n+\telse\n+\t\trc = response->ipsec_data.sp_cfg_resp->rule_id;\n+\n+update_cleanup:\n+\trte_free(request);\n+\trte_free(response);\n+\n+\treturn rc;\n+}\n+\n+static uint32_t\n+iavf_ipsec_crypto_sa_update_esn(struct iavf_adapter *adapter,\n+\tstruct iavf_security_session *sess)\n+{\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tsize_t request_len, response_len;\n+\tint rc = 0;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sa_update);\n+\trequest = rte_malloc(\"iavf-sa-update-request\", request_len, 0);\n+\tif (request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_resp);\n+\tresponse = rte_malloc(\"iavf-sa-update-response\", response_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_SA_UPDATE;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/* set request params */\n+\trequest->ipsec_data.sa_update->sa_index = sess->sa.hw_idx;\n+\trequest->ipsec_data.sa_update->esn_hi = sess->esn.hi;\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id)\n+\t\trc = -EFAULT;\n+\telse\n+\t\trc = 
response->ipsec_data.ipsec_resp->resp;\n+\n+update_cleanup:\n+\trte_free(request);\n+\trte_free(response);\n+\n+\treturn rc;\n+}\n+\n+static int\n+iavf_ipsec_crypto_session_update(void *device,\n+\t\tstruct rte_security_session *session,\n+\t\tstruct rte_security_session_conf *conf)\n+{\n+\tstruct iavf_adapter *adapter = NULL;\n+\tstruct iavf_security_session *iavf_sess = NULL;\n+\tstruct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;\n+\tint rc = 0;\n+\n+\tadapter = IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);\n+\tiavf_sess = (struct iavf_security_session *)session->sess_private_data;\n+\n+\t/* verify we have a valid session and that it belong to this adapter */\n+\tif (unlikely(iavf_sess == NULL || iavf_sess->adapter != adapter))\n+\t\treturn -EINVAL;\n+\n+\t/* update esn hi 32-bits */\n+\tif (iavf_sess->esn.enabled && conf->ipsec.options.esn) {\n+\t\t/**\n+\t\t * Update ESN in hardware for inbound SA. Store in\n+\t\t * iavf_security_session for outbound SA for use\n+\t\t * in *iavf_ipsec_crypto_pkt_metadata_set* function.\n+\t\t */\n+\t\tif (iavf_sess->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)\n+\t\t\trc = iavf_ipsec_crypto_sa_update_esn(adapter,\n+\t\t\t\t\tiavf_sess);\n+\t\telse\n+\t\t\tiavf_sess->esn.hi = conf->ipsec.esn.hi;\n+\t}\n+\n+\t/* update TSO MSS size */\n+\tif (iavf_sess->tso.enabled && conf->ipsec.options.tso)\n+\t\tiavf_sess->tso.mss = conf->ipsec.mss;\n+\n+\treturn rc;\n+}\n+\n+static int\n+iavf_ipsec_crypto_session_stats_get(void *device __rte_unused,\n+\t\tstruct rte_security_session *session __rte_unused,\n+\t\tstruct rte_security_stats *stats __rte_unused)\n+{\n+\treturn -EOPNOTSUPP;\n+}\n+\n+int\n+iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,\n+\tuint8_t is_v4, uint32_t flow_id)\n+{\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tsize_t request_len, response_len;\n+\tint rc = 0;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sp_destroy);\n+\trequest = rte_malloc(\"iavf-sp-del-request\", request_len, 0);\n+\tif (request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_resp);\n+\tresponse = rte_malloc(\"iavf-sp-del-response\", response_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_SP_DESTROY;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/* set security policy params */\n+\trequest->ipsec_data.sp_destroy->table_id = is_v4 ?\n+\t\t\tVIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV4 :\n+\t\t\tVIRTCHNL_IPSEC_INBOUND_SPD_TBL_IPV6;\n+\trequest->ipsec_data.sp_destroy->rule_id = flow_id;\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id)\n+\t\trc = -EFAULT;\n+\telse\n+\t\treturn response->ipsec_data.ipsec_status->status;\n+\n+update_cleanup:\n+\trte_free(request);\n+\trte_free(response);\n+\n+\treturn rc;\n+}\n+\n+static uint32_t\n+iavf_ipsec_crypto_sa_del(struct iavf_adapter *adapter,\n+\tstruct iavf_security_session *sess)\n+{\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tsize_t request_len, 
response_len;\n+\n+\tint rc = 0;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_sa_destroy);\n+\n+\trequest = rte_malloc(\"iavf-sa-del-request\", request_len, 0);\n+\tif (request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_resp);\n+\n+\tresponse = rte_malloc(\"iavf-sa-del-response\", response_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_SA_DESTROY;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/**\n+\t * SA delete supports deletetion of 1-8 specified SA's or if the flag\n+\t * field is zero, all SA's associated with VF will be deleted.\n+\t */\n+\tif (sess) {\n+\t\trequest->ipsec_data.sa_destroy->flag = 0x1;\n+\t\trequest->ipsec_data.sa_destroy->sa_index[0] = sess->sa.hw_idx;\n+\t} else {\n+\t\trequest->ipsec_data.sa_destroy->flag = 0x0;\n+\t}\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id)\n+\t\trc = -EFAULT;\n+\n+\t/**\n+\t * Delete status will be the same bitmask as sa_destroy request flag if\n+\t * deletes successful\n+\t */\n+\tif (request->ipsec_data.sa_destroy->flag !=\n+\t\t\tresponse->ipsec_data.ipsec_status->status)\n+\t\trc = -EFAULT;\n+\n+update_cleanup:\n+\trte_free(response);\n+\trte_free(request);\n+\n+\treturn rc;\n+}\n+\n+\n+static int\n+iavf_ipsec_crypto_session_destroy(void *device,\n+\t\tstruct rte_security_session *session)\n+{\n+\tstruct iavf_adapter *adapter = NULL;\n+\tstruct iavf_security_session *iavf_sess = NULL;\n+\tstruct rte_eth_dev *eth_dev = (struct rte_eth_dev *)device;\n+\tint ret;\n+\n+\tadapter = IAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);\n+\tiavf_sess = (struct iavf_security_session *)session->sess_private_data;\n+\n+\t/* verify we have a valid session and that it belong to this adapter */\n+\tif (unlikely(iavf_sess == NULL || iavf_sess->adapter != adapter))\n+\t\treturn -EINVAL;\n+\n+\tret = iavf_ipsec_crypto_sa_del(adapter, iavf_sess);\n+\trte_mempool_put(rte_mempool_from_obj(iavf_sess), (void *)iavf_sess);\n+\treturn ret;\n+}\n+\n+/**\n+ * Get ESP trailer from packet as well as calculate the total ESP trailer\n+ * length, which include padding, ESP trailer footer and the ICV\n+ */\n+static inline struct rte_esp_tail *\n+iavf_ipsec_crypto_get_esp_trailer(struct rte_mbuf *m,\n+\tstruct iavf_security_session *s, uint16_t *esp_trailer_length)\n+{\n+\tstruct rte_esp_tail *esp_trailer;\n+\n+\tuint16_t length = sizeof(struct rte_esp_tail) + s->icv_sz;\n+\tuint16_t offset = 0;\n+\n+\t/**\n+\t * The ICV will not be present in TSO packets as this is appended by\n+\t * hardware during segment generation\n+\t */\n+\tif (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG))\n+\t\tlength -=  s->icv_sz;\n+\n+\t*esp_trailer_length = length;\n+\n+\t/**\n+\t * Calculate offset in packet to ESP trailer header, this should be\n+\t * total packet length less the size of the ESP trailer plus the ICV\n+\t * length if it is present\n+\t */\n+\toffset = rte_pktmbuf_pkt_len(m) - length;\n+\n+\tif (m->nb_segs > 1) {\n+\t\t/* find segment which esp trailer is located 
*/\n+\t\twhile (m->data_len < offset) {\n+\t\t\toffset -= m->data_len;\n+\t\t\tm = m->next;\n+\t\t}\n+\t}\n+\n+\tesp_trailer = rte_pktmbuf_mtod_offset(m, struct rte_esp_tail *, offset);\n+\n+\t*esp_trailer_length += esp_trailer->pad_len;\n+\n+\treturn esp_trailer;\n+}\n+\n+\n+static inline uint16_t\n+iavf_ipsec_crypto_compute_l4_payload_length(struct rte_mbuf *m,\n+\tstruct iavf_security_session *s, uint16_t esp_tlen)\n+{\n+\tuint16_t ol2_len = m->l2_len;\t/* MAC + VLAN */\n+\tuint16_t ol3_len = 0;\t\t/* ipv4/6 + ext hdrs */\n+\tuint16_t ol4_len = 0;\t\t/* UDP NATT */\n+\tuint16_t l3_len = 0;\t\t/* IPv4/6 + ext hdrs */\n+\tuint16_t l4_len = 0;\t\t/* TCP/UDP/STCP hdrs */\n+\tuint16_t esp_hlen = sizeof(struct rte_esp_hdr) + s->iv_sz;\n+\n+\tif (s->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)\n+\t\tol3_len = m->outer_l3_len;\n+\t\t/**<\n+\t\t * application provided l3len assumed to include length of\n+\t\t * ipv4/6 hdr + ext hdrs\n+\t\t */\n+\n+\tif (s->udp_encap.enabled)\n+\t\tol4_len = sizeof(struct rte_udp_hdr);\n+\n+\tl3_len = m->l3_len;\n+\tl4_len = m->l4_len;\n+\n+\treturn rte_pktmbuf_pkt_len(m) - (ol2_len + ol3_len + ol4_len +\n+\t\t\tesp_hlen + l3_len + l4_len + esp_tlen);\n+}\n+\n+\n+static int\n+iavf_ipsec_crypto_pkt_metadata_set(void *device,\n+\t\t\t struct rte_security_session *session,\n+\t\t\t struct rte_mbuf *m, void *params)\n+{\n+\tstruct rte_eth_dev *ethdev = device;\n+\tstruct iavf_adapter *adapter =\n+\t\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(ethdev->data->dev_private);\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\tstruct iavf_security_session *iavf_sess = session->sess_private_data;\n+\tstruct iavf_ipsec_crypto_pkt_metadata *md;\n+\tstruct rte_esp_tail *esp_tail;\n+\tuint64_t *sqn = params;\n+\tuint16_t esp_trailer_length;\n+\n+\t/* Check we have valid session and is associated with this device */\n+\tif (unlikely(iavf_sess == NULL || iavf_sess->adapter != adapter))\n+\t\treturn -EINVAL;\n+\n+\t/* Get dynamic metadata location from mbuf */\n+\tmd = RTE_MBUF_DYNFIELD(m, iavf_sctx->pkt_md_offset,\n+\t\tstruct iavf_ipsec_crypto_pkt_metadata *);\n+\n+\t/* Set immutatable metadata values from session template */\n+\tmemcpy(md, &iavf_sess->pkt_metadata_template,\n+\t\tsizeof(struct iavf_ipsec_crypto_pkt_metadata));\n+\n+\tesp_tail = iavf_ipsec_crypto_get_esp_trailer(m, iavf_sess,\n+\t\t\t&esp_trailer_length);\n+\n+\t/* Set per packet mutable metadata values */\n+\tmd->esp_trailer_len = esp_trailer_length;\n+\tmd->l4_payload_len = iavf_ipsec_crypto_compute_l4_payload_length(m,\n+\t\t\t\tiavf_sess, esp_trailer_length);\n+\tmd->next_proto = esp_tail->next_proto;\n+\n+\t/* If Extended SN in use set the upper 32-bits in metadata */\n+\tif (iavf_sess->esn.enabled && sqn != NULL)\n+\t\tmd->esn = (uint32_t)(*sqn >> 32);\n+\n+\treturn 0;\n+}\n+\n+static int\n+iavf_ipsec_crypto_device_capabilities_get(struct iavf_adapter *adapter,\n+\t\tstruct virtchnl_ipsec_cap *capability)\n+{\n+\t/* Perform pf-vf comms */\n+\tstruct inline_ipsec_msg *request = NULL, *response = NULL;\n+\tsize_t request_len, response_len;\n+\tint rc;\n+\n+\trequest_len = sizeof(struct inline_ipsec_msg);\n+\n+\trequest = rte_malloc(\"iavf-device-capability-request\", request_len, 0);\n+\tif (request == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto update_cleanup;\n+\t}\n+\n+\tresponse_len = sizeof(struct inline_ipsec_msg) +\n+\t\t\tsizeof(struct virtchnl_ipsec_cap);\n+\tresponse = rte_malloc(\"iavf-device-capability-response\",\n+\t\t\tresponse_len, 0);\n+\tif (response == NULL) {\n+\t\trc = -ENOMEM;\n+\t\tgoto 
update_cleanup;\n+\t}\n+\n+\t/* set msg header params */\n+\trequest->ipsec_opcode = INLINE_IPSEC_OP_GET_CAP;\n+\trequest->req_id = (uint16_t)0xDEADBEEF;\n+\n+\t/* send virtual channel request to add SA to hardware database */\n+\trc = iavf_ipsec_crypto_request(adapter,\n+\t\t\t(uint8_t *)request, request_len,\n+\t\t\t(uint8_t *)response, response_len);\n+\tif (rc)\n+\t\tgoto update_cleanup;\n+\n+\t/* verify response id */\n+\tif (response->ipsec_opcode != request->ipsec_opcode ||\n+\t\tresponse->req_id != request->req_id){\n+\t\trc = -EFAULT;\n+\t\tgoto update_cleanup;\n+\t}\n+\tmemcpy(capability, response->ipsec_data.ipsec_cap, sizeof(*capability));\n+\n+update_cleanup:\n+\trte_free(response);\n+\trte_free(request);\n+\n+\treturn rc;\n+}\n+\n+\n+enum rte_crypto_auth_algorithm auth_maptbl[] = {\n+\t/* Hash Algorithm */\n+\t[VIRTCHNL_HASH_NO_ALG] = RTE_CRYPTO_AUTH_NULL,\n+\t[VIRTCHNL_AES_CBC_MAC] = RTE_CRYPTO_AUTH_AES_CBC_MAC,\n+\t[VIRTCHNL_AES_CMAC] = RTE_CRYPTO_AUTH_AES_CMAC,\n+\t[VIRTCHNL_AES_GMAC] = RTE_CRYPTO_AUTH_AES_GMAC,\n+\t[VIRTCHNL_AES_XCBC_MAC] = RTE_CRYPTO_AUTH_AES_XCBC_MAC,\n+\t[VIRTCHNL_MD5_HMAC] = RTE_CRYPTO_AUTH_MD5_HMAC,\n+\t[VIRTCHNL_SHA1_HMAC] = RTE_CRYPTO_AUTH_SHA1_HMAC,\n+\t[VIRTCHNL_SHA224_HMAC] = RTE_CRYPTO_AUTH_SHA224_HMAC,\n+\t[VIRTCHNL_SHA256_HMAC] = RTE_CRYPTO_AUTH_SHA256_HMAC,\n+\t[VIRTCHNL_SHA384_HMAC] = RTE_CRYPTO_AUTH_SHA384_HMAC,\n+\t[VIRTCHNL_SHA512_HMAC] = RTE_CRYPTO_AUTH_SHA512_HMAC,\n+\t[VIRTCHNL_SHA3_224_HMAC] = RTE_CRYPTO_AUTH_SHA3_224_HMAC,\n+\t[VIRTCHNL_SHA3_256_HMAC] = RTE_CRYPTO_AUTH_SHA3_256_HMAC,\n+\t[VIRTCHNL_SHA3_384_HMAC] = RTE_CRYPTO_AUTH_SHA3_384_HMAC,\n+\t[VIRTCHNL_SHA3_512_HMAC] = RTE_CRYPTO_AUTH_SHA3_512_HMAC,\n+};\n+\n+static void\n+update_auth_capabilities(struct rte_cryptodev_capabilities *scap,\n+\t\tstruct virtchnl_algo_cap *acap)\n+{\n+\tstruct rte_cryptodev_symmetric_capability *capability = &scap->sym;\n+\n+\tscap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;\n+\n+\tcapability->xform_type = RTE_CRYPTO_SYM_XFORM_AUTH;\n+\n+\tcapability->auth.algo = auth_maptbl[acap->algo_type];\n+\tcapability->auth.block_size = acap->block_size;\n+\n+\tcapability->auth.key_size.min = acap->min_key_size;\n+\tcapability->auth.key_size.max = acap->max_key_size;\n+\tcapability->auth.key_size.increment = acap->inc_key_size;\n+\n+\tcapability->auth.digest_size.min = acap->min_digest_size;\n+\tcapability->auth.digest_size.max = acap->max_digest_size;\n+\tcapability->auth.digest_size.increment = acap->inc_digest_size;\n+}\n+\n+enum rte_crypto_cipher_algorithm cipher_maptbl[] = {\n+\t/* Cipher Algorithm */\n+\t[VIRTCHNL_CIPHER_NO_ALG] = RTE_CRYPTO_CIPHER_NULL,\n+\t[VIRTCHNL_3DES_CBC] = RTE_CRYPTO_CIPHER_3DES_CBC,\n+\t[VIRTCHNL_AES_CBC] = RTE_CRYPTO_CIPHER_AES_CBC,\n+\t[VIRTCHNL_AES_CTR] = RTE_CRYPTO_CIPHER_AES_CTR,\n+};\n+\n+\n+static void\n+update_cipher_capabilities(struct rte_cryptodev_capabilities *scap,\n+\tstruct virtchnl_algo_cap *acap)\n+{\n+\tstruct rte_cryptodev_symmetric_capability *capability = &scap->sym;\n+\n+\tscap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;\n+\n+\tcapability->xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER;\n+\n+\tcapability->cipher.algo = cipher_maptbl[acap->algo_type];\n+\n+\tcapability->cipher.block_size = acap->block_size;\n+\n+\tcapability->cipher.key_size.min = acap->min_key_size;\n+\tcapability->cipher.key_size.max = acap->max_key_size;\n+\tcapability->cipher.key_size.increment = acap->inc_key_size;\n+\n+\tcapability->cipher.iv_size.min = acap->min_iv_size;\n+\tcapability->cipher.iv_size.max = 
acap->max_iv_size;\n+\tcapability->cipher.iv_size.increment = acap->inc_iv_size;\n+}\n+\n+enum rte_crypto_aead_algorithm aead_maptbl[] = {\n+\t/* AEAD Algorithm */\n+\t[VIRTCHNL_AES_CCM] = RTE_CRYPTO_AEAD_AES_CCM,\n+\t[VIRTCHNL_AES_GCM] = RTE_CRYPTO_AEAD_AES_GCM,\n+\t[VIRTCHNL_CHACHA20_POLY1305] = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,\n+};\n+\n+static void\n+update_aead_capabilities(struct rte_cryptodev_capabilities *scap,\n+\tstruct virtchnl_algo_cap *acap)\n+{\n+\tstruct rte_cryptodev_symmetric_capability *capability = &scap->sym;\n+\n+\tscap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;\n+\n+\tcapability->xform_type = RTE_CRYPTO_SYM_XFORM_AEAD;\n+\n+\tcapability->aead.algo = aead_maptbl[acap->algo_type];\n+\n+\tcapability->aead.block_size = acap->block_size;\n+\n+\tcapability->aead.key_size.min = acap->min_key_size;\n+\tcapability->aead.key_size.max = acap->max_key_size;\n+\tcapability->aead.key_size.increment = acap->inc_key_size;\n+\n+\tcapability->aead.aad_size.min = acap->min_aad_size;\n+\tcapability->aead.aad_size.max = acap->max_aad_size;\n+\tcapability->aead.aad_size.increment = acap->inc_aad_size;\n+\n+\tcapability->aead.iv_size.min = acap->min_iv_size;\n+\tcapability->aead.iv_size.max = acap->max_iv_size;\n+\tcapability->aead.iv_size.increment = acap->inc_iv_size;\n+\n+\tcapability->aead.digest_size.min = acap->min_digest_size;\n+\tcapability->aead.digest_size.max = acap->max_digest_size;\n+\tcapability->aead.digest_size.increment = acap->inc_digest_size;\n+}\n+\n+\n+/**\n+ * Dynamically set crypto capabilities based on virtchannel IPsec\n+ * capabilities structure.\n+ */\n+int\n+iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx\n+\t\t*iavf_sctx, struct virtchnl_ipsec_cap *vch_cap)\n+{\n+\tstruct rte_cryptodev_capabilities *capabilities;\n+\tint i, j, number_of_capabilities = 0, ci = 0;\n+\n+\t/* Count the total number of crypto algorithms supported */\n+\tfor (i = 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++)\n+\t\tnumber_of_capabilities += vch_cap->cap[i].algo_cap_num;\n+\n+\t/**\n+\t * Allocate cryptodev capabilities structure for\n+\t * *number_of_capabilities* items plus one item to null terminate the\n+\t * array\n+\t */\n+\tcapabilities = rte_zmalloc(\"crypto_cap\",\n+\t\tsizeof(struct rte_cryptodev_capabilities) *\n+\t\t(number_of_capabilities + 1), 0);\n+\tcapabilities[number_of_capabilities].op = RTE_CRYPTO_OP_TYPE_UNDEFINED;\n+\n+\t/**\n+\t * Iterate over each virtchl crypto capability by crypto type and\n+\t * algorithm.\n+\t */\n+\tfor (i = 0; i < VIRTCHNL_IPSEC_MAX_CRYPTO_CAP_NUM; i++) {\n+\t\tfor (j = 0; j < vch_cap->cap[i].algo_cap_num; j++, ci++) {\n+\t\t\tswitch (vch_cap->cap[i].crypto_type) {\n+\t\t\tcase VIRTCHNL_AUTH:\n+\t\t\t\tupdate_auth_capabilities(&capabilities[ci],\n+\t\t\t\t\t&vch_cap->cap[i].algo_cap_list[j]);\n+\t\t\t\tbreak;\n+\t\t\tcase VIRTCHNL_CIPHER:\n+\t\t\t\tupdate_cipher_capabilities(&capabilities[ci],\n+\t\t\t\t\t&vch_cap->cap[i].algo_cap_list[j]);\n+\t\t\t\tbreak;\n+\t\t\tcase VIRTCHNL_AEAD:\n+\t\t\t\tupdate_aead_capabilities(&capabilities[ci],\n+\t\t\t\t\t&vch_cap->cap[i].algo_cap_list[j]);\n+\t\t\t\tbreak;\n+\t\t\tdefault:\n+\t\t\t\tcapabilities[ci].op =\n+\t\t\t\t\t\tRTE_CRYPTO_OP_TYPE_UNDEFINED;\n+\t\t\t\tbreak;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tiavf_sctx->crypto_capabilities = capabilities;\n+\treturn 0;\n+}\n+\n+/**\n+ * Get security capabilities for device\n+ */\n+static const struct rte_security_capability *\n+iavf_ipsec_crypto_capabilities_get(void *device)\n+{\n+\tstruct rte_eth_dev *eth_dev = (struct rte_eth_dev 
*)device;\n+\tstruct iavf_adapter *adapter =\n+\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(eth_dev->data->dev_private);\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\tunsigned int i;\n+\n+\tstatic struct rte_security_capability iavf_security_capabilities[] = {\n+\t\t{ /* IPsec Inline Crypto ESP Tunnel Egress */\n+\t\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,\n+\t\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t\t.ipsec = {\n+\t\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,\n+\t\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,\n+\t\t\t\t.options = { .udp_encap = 1, .tso = 1,\n+\t\t\t\t\t\t.stats = 1, .esn = 1 },\n+\t\t\t},\n+\t\t\t.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA\n+\t\t},\n+\t\t{ /* IPsec Inline Crypto ESP Tunnel Ingress */\n+\t\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,\n+\t\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t\t.ipsec = {\n+\t\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,\n+\t\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,\n+\t\t\t\t.options = { .udp_encap = 1, .tso = 1,\n+\t\t\t\t\t\t.stats = 1, .esn = 1 },\n+\t\t\t},\n+\t\t\t.ol_flags = 0\n+\t\t},\n+\t\t{ /* IPsec Inline Crypto ESP Transport Egress */\n+\t\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,\n+\t\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t\t.ipsec = {\n+\t\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,\n+\t\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,\n+\t\t\t\t.options = { .udp_encap = 1, .tso = 1,\n+\t\t\t\t\t\t.stats = 1, .esn = 1 },\n+\t\t\t},\n+\t\t\t.ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA\n+\t\t},\n+\t\t{ /* IPsec Inline Crypto ESP Transport Ingress */\n+\t\t\t.action = RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO,\n+\t\t\t.protocol = RTE_SECURITY_PROTOCOL_IPSEC,\n+\t\t\t.ipsec = {\n+\t\t\t\t.proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,\n+\t\t\t\t.mode = RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT,\n+\t\t\t\t.direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,\n+\t\t\t\t.options = { .udp_encap = 1, .tso = 1,\n+\t\t\t\t\t\t.stats = 1, .esn = 1 }\n+\t\t\t},\n+\t\t\t.ol_flags = 0\n+\t\t},\n+\t\t{\n+\t\t\t.action = RTE_SECURITY_ACTION_TYPE_NONE\n+\t\t}\n+\t};\n+\n+\t/**\n+\t * Update the security capabilities struct with the runtime discovered\n+\t * crypto capabilities, except for the last element of the array which\n+\t * is the null termination\n+\t */\n+\tfor (i = 0; i < ((sizeof(iavf_security_capabilities) /\n+\t\t\tsizeof(iavf_security_capabilities[0])) - 1); i++) {\n+\t\tiavf_security_capabilities[i].crypto_capabilities =\n+\t\t\tiavf_sctx->crypto_capabilities;\n+\t}\n+\n+\treturn iavf_security_capabilities;\n+}\n+\n+static struct rte_security_ops iavf_ipsec_crypto_ops = {\n+\t.session_get_size\t\t= iavf_ipsec_crypto_session_size_get,\n+\t.session_create\t\t\t= iavf_ipsec_crypto_session_create,\n+\t.session_update\t\t\t= iavf_ipsec_crypto_session_update,\n+\t.session_stats_get\t\t= iavf_ipsec_crypto_session_stats_get,\n+\t.session_destroy\t\t= iavf_ipsec_crypto_session_destroy,\n+\t.set_pkt_metadata\t\t= iavf_ipsec_crypto_pkt_metadata_set,\n+\t.get_userdata\t\t\t= NULL,\n+\t.capabilities_get\t\t= iavf_ipsec_crypto_capabilities_get,\n+};\n+\n+int\n+iavf_security_ctx_create(struct iavf_adapter *adapter)\n+{\n+\tstruct rte_security_ctx *sctx;\n+\n+\tsctx = rte_malloc(\"security_ctx\", sizeof(struct rte_security_ctx), 0);\n+\tif (sctx == NULL)\n+\t\treturn -ENOMEM;\n+\n+\tsctx->device = 
adapter->eth_dev;\n+\tsctx->ops = &iavf_ipsec_crypto_ops;\n+\tsctx->sess_cnt = 0;\n+\n+\tadapter->eth_dev->security_ctx = sctx;\n+\n+\tif (adapter->security_ctx == NULL) {\n+\t\tadapter->security_ctx = rte_malloc(\"iavf_security_ctx\",\n+\t\t\t\tsizeof(struct iavf_security_ctx), 0);\n+\t\tif (adapter->security_ctx == NULL)\n+\t\t\treturn -ENOMEM;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int\n+iavf_security_init(struct iavf_adapter *adapter)\n+{\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\tstruct rte_mbuf_dynfield pkt_md_dynfield = {\n+\t\t.name = \"iavf_ipsec_crypto_pkt_metadata\",\n+\t\t.size = sizeof(struct iavf_ipsec_crypto_pkt_metadata),\n+\t\t.align = __alignof__(struct iavf_ipsec_crypto_pkt_metadata)\n+\t};\n+\tstruct virtchnl_ipsec_cap capabilities;\n+\tint rc;\n+\n+\tiavf_sctx->adapter = adapter;\n+\n+\tiavf_sctx->pkt_md_offset = rte_mbuf_dynfield_register(&pkt_md_dynfield);\n+\tif (iavf_sctx->pkt_md_offset < 0)\n+\t\treturn iavf_sctx->pkt_md_offset;\n+\n+\t/* Get device capabilities from Inline IPsec driver over PF-VF comms */\n+\trc = iavf_ipsec_crypto_device_capabilities_get(adapter, &capabilities);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\treturn\tiavf_ipsec_crypto_set_security_capabililites(iavf_sctx,\n+\t\t\t&capabilities);\n+}\n+\n+int\n+iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter)\n+{\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\n+\treturn iavf_sctx->pkt_md_offset;\n+}\n+\n+int\n+iavf_security_ctx_destroy(struct iavf_adapter *adapter)\n+{\n+\tstruct rte_security_ctx *sctx  = adapter->eth_dev->security_ctx;\n+\tstruct iavf_security_ctx *iavf_sctx = adapter->security_ctx;\n+\n+\tif (iavf_sctx == NULL)\n+\t\treturn -ENODEV;\n+\n+\t/* TODO: Add resources cleanup */\n+\n+\t/* free and reset security data structures */\n+\trte_free(iavf_sctx);\n+\trte_free(sctx);\n+\n+\tiavf_sctx = NULL;\n+\tsctx = NULL;\n+\n+\treturn 0;\n+}\n+\n+int\n+iavf_ipsec_crypto_supported(struct iavf_adapter *adapter)\n+{\n+\tstruct virtchnl_vf_resource *resources = adapter->vf.vf_res;\n+\n+\t/** Capability check for IPsec Crypto */\n+\tif (resources && (resources->vf_cap_flags &\n+\t\tVIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO))\n+\t\treturn true;\n+\n+\treturn false;\n+}\n+\n+\n+#define IAVF_IPSEC_INSET_ESP (\\\n+\tIAVF_INSET_ESP_SPI)\n+\n+#define IAVF_IPSEC_INSET_AH (\\\n+\tIAVF_INSET_AH_SPI)\n+\n+#define IAVF_IPSEC_INSET_IPV4_NATT_ESP (\\\n+\tIAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \\\n+\tIAVF_INSET_ESP_SPI)\n+\n+#define IAVF_IPSEC_INSET_IPV6_NATT_ESP (\\\n+\tIAVF_INSET_IPV6_SRC | IAVF_INSET_IPV6_DST | \\\n+\tIAVF_INSET_ESP_SPI)\n+\n+enum iavf_ipsec_flow_pt_type {\n+\tIAVF_PATTERN_ESP = 1,\n+\tIAVF_PATTERN_AH,\n+\tIAVF_PATTERN_UDP_ESP,\n+};\n+enum iavf_ipsec_flow_pt_ip_ver {\n+\tIAVF_PATTERN_IPV4 = 1,\n+\tIAVF_PATTERN_IPV6,\n+};\n+\n+#define IAVF_PATTERN(t, ipt) ((void *)((t) | ((ipt) << 4)))\n+#define IAVF_PATTERN_TYPE(pt) ((pt) & 0x0F)\n+#define IAVF_PATTERN_IP_V(pt) ((pt) >> 4)\n+\n+static struct iavf_pattern_match_item iavf_ipsec_flow_pattern[] = {\n+\t{iavf_pattern_eth_ipv4_esp,\tIAVF_IPSEC_INSET_ESP,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_ESP, IAVF_PATTERN_IPV4)},\n+\t{iavf_pattern_eth_ipv6_esp,\tIAVF_IPSEC_INSET_ESP,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_ESP, IAVF_PATTERN_IPV6)},\n+\t{iavf_pattern_eth_ipv4_ah,\tIAVF_IPSEC_INSET_AH,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_AH, IAVF_PATTERN_IPV4)},\n+\t{iavf_pattern_eth_ipv6_ah,\tIAVF_IPSEC_INSET_AH,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_AH, 
IAVF_PATTERN_IPV6)},\n+\t{iavf_pattern_eth_ipv4_udp_esp,\tIAVF_IPSEC_INSET_IPV4_NATT_ESP,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_UDP_ESP, IAVF_PATTERN_IPV4)},\n+\t{iavf_pattern_eth_ipv6_udp_esp,\tIAVF_IPSEC_INSET_IPV6_NATT_ESP,\n+\t\t\tIAVF_PATTERN(IAVF_PATTERN_UDP_ESP, IAVF_PATTERN_IPV6)},\n+};\n+\n+struct iavf_ipsec_flow_item {\n+\tuint64_t id;\n+\tuint8_t is_ipv4;\n+\tuint32_t spi;\n+\tstruct rte_ether_hdr eth_hdr;\n+\tunion {\n+\t\tstruct rte_ipv4_hdr ipv4_hdr;\n+\t\tstruct rte_ipv6_hdr ipv6_hdr;\n+\t};\n+\tstruct rte_udp_hdr udp_hdr;\n+};\n+\n+static void\n+parse_eth_item(const struct rte_flow_item_eth *item,\n+\t\tstruct rte_ether_hdr *eth)\n+{\n+\tmemcpy(eth->s_addr.addr_bytes,\n+\t\t\titem->src.addr_bytes, sizeof(eth->s_addr));\n+\tmemcpy(eth->d_addr.addr_bytes,\n+\t\t\titem->dst.addr_bytes, sizeof(eth->d_addr));\n+}\n+\n+static void\n+parse_ipv4_item(const struct rte_flow_item_ipv4 *item,\n+\t\tstruct rte_ipv4_hdr *ipv4)\n+{\n+\tipv4->src_addr = item->hdr.src_addr;\n+\tipv4->dst_addr = item->hdr.dst_addr;\n+}\n+\n+static void\n+parse_ipv6_item(const struct rte_flow_item_ipv6 *item,\n+\t\tstruct rte_ipv6_hdr *ipv6)\n+{\n+\tmemcpy(ipv6->src_addr, item->hdr.src_addr, 16);\n+\tmemcpy(ipv6->dst_addr, item->hdr.dst_addr, 16);\n+}\n+\n+static void\n+parse_udp_item(const struct rte_flow_item_udp *item, struct rte_udp_hdr *udp)\n+{\n+\tudp->dst_port = item->hdr.dst_port;\n+\tudp->src_port = item->hdr.src_port;\n+}\n+\n+static int\n+has_security_action(const struct rte_flow_action actions[],\n+\tconst void **session)\n+{\n+\t/* only {SECURITY; END} supported */\n+\tif (actions[0].type == RTE_FLOW_ACTION_TYPE_SECURITY\n+\t                && actions[1].type == RTE_FLOW_ACTION_TYPE_END) {\n+\t\t*session = actions[0].conf;\n+\t\treturn true;\n+\t}\n+\treturn false;\n+}\n+\n+\n+static struct iavf_ipsec_flow_item *\n+iavf_ipsec_flow_item_parse(struct rte_eth_dev *ethdev,\n+\t\tconst struct rte_flow_item pattern[],\n+\t\tconst struct rte_flow_action actions[],\n+\t\tuint32_t type)\n+{\n+\tconst void *session;\n+\tstruct iavf_ipsec_flow_item\n+\t\t*ipsec_flow = rte_malloc(\"security-flow-rule\",\n+\t\tsizeof(struct iavf_ipsec_flow_item), 0);\n+\tenum iavf_ipsec_flow_pt_type p_type = IAVF_PATTERN_TYPE(type);\n+\tenum iavf_ipsec_flow_pt_ip_ver p_ip_type = IAVF_PATTERN_IP_V(type);\n+\n+\tif (ipsec_flow == NULL)\n+\t\treturn NULL;\n+\n+\tipsec_flow->is_ipv4 = (p_ip_type == IAVF_PATTERN_IPV4);\n+\n+\tif (pattern[0].spec)\n+\t\tparse_eth_item((const struct rte_flow_item_eth *)\n+\t\t\t\tpattern[0].spec, &ipsec_flow->eth_hdr);\n+\n+\tswitch (p_type) {\n+\tcase IAVF_PATTERN_ESP:\n+\t\tif (ipsec_flow->is_ipv4) {\n+\t\t\tparse_ipv4_item((const struct rte_flow_item_ipv4 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv4_hdr);\n+\t\t} else {\n+\t\t\tparse_ipv6_item((const struct rte_flow_item_ipv6 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv6_hdr);\n+\t\t}\n+\t\tipsec_flow->spi =\n+\t\t\t((const struct rte_flow_item_esp *)\n+\t\t\t\t\tpattern[2].spec)->hdr.spi;\n+\t\tbreak;\n+\tcase IAVF_PATTERN_AH:\n+\t\tif (ipsec_flow->is_ipv4) {\n+\t\t\tparse_ipv4_item((const struct rte_flow_item_ipv4 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv4_hdr);\n+\t\t} else {\n+\t\t\tparse_ipv6_item((const struct rte_flow_item_ipv6 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv6_hdr);\n+\t\t}\n+\t\tipsec_flow->spi =\n+\t\t\t((const struct rte_flow_item_ah *)\n+\t\t\t\t\tpattern[2].spec)->spi;\n+\t\tbreak;\n+\tcase IAVF_PATTERN_UDP_ESP:\n+\t\tif (ipsec_flow->is_ipv4) 
{\n+\t\t\tparse_ipv4_item((const struct rte_flow_item_ipv4 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv4_hdr);\n+\t\t} else {\n+\t\t\tparse_ipv6_item((const struct rte_flow_item_ipv6 *)\n+\t\t\t\t\tpattern[1].spec,\n+\t\t\t\t\t&ipsec_flow->ipv6_hdr);\n+\t\t}\n+\t\tparse_udp_item((const struct rte_flow_item_udp *)\n+\t\t\t\tpattern[2].spec,\n+\t\t\t&ipsec_flow->udp_hdr);\n+\t\tipsec_flow->spi =\n+\t\t\t((const struct rte_flow_item_esp *)\n+\t\t\t\t\tpattern[3].spec)->hdr.spi;\n+\t\tbreak;\n+\tdefault:\n+\t\tgoto flow_cleanup;\n+\t}\n+\n+\n+\tif (!has_security_action(actions, &session))\n+\t\tgoto flow_cleanup;\n+\n+\tif (!iavf_ipsec_crypto_action_valid(ethdev, session,\n+\t\t\tipsec_flow->spi))\n+\t\tgoto flow_cleanup;\n+\n+\treturn ipsec_flow;\n+\n+flow_cleanup:\n+\trte_free(ipsec_flow);\n+\treturn NULL;\n+}\n+\n+\n+\n+static struct iavf_flow_parser iavf_ipsec_flow_parser;\n+\n+static int\n+iavf_ipsec_flow_init(struct iavf_adapter *ad)\n+{\n+\tstruct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(ad);\n+\tstruct iavf_flow_parser *parser;\n+\n+\tif (!vf->vf_res)\n+\t\treturn -EINVAL;\n+\n+\tif (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_INLINE_IPSEC_CRYPTO)\n+\t\tparser = &iavf_ipsec_flow_parser;\n+\telse\n+\t\treturn -ENOTSUP;\n+\n+\treturn iavf_register_parser(parser, ad);\n+}\n+\n+static void\n+iavf_ipsec_flow_uninit(struct iavf_adapter *ad)\n+{\n+\tiavf_unregister_parser(&iavf_ipsec_flow_parser, ad);\n+}\n+\n+static int\n+iavf_ipsec_flow_create(struct iavf_adapter *ad,\n+\t\tstruct rte_flow *flow,\n+\t\tvoid *meta,\n+\t\tstruct rte_flow_error *error)\n+{\n+\tstruct iavf_ipsec_flow_item *ipsec_flow = meta;\n+\tif (!ipsec_flow) {\n+\t\trte_flow_error_set(error, EINVAL,\n+\t\t\t\tRTE_FLOW_ERROR_TYPE_HANDLE, NULL,\n+\t\t\t\t\"NULL rule.\");\n+\t\treturn -rte_errno;\n+\t}\n+\n+\tif (ipsec_flow->is_ipv4) {\n+\t\tipsec_flow->id =\n+\t\t\tiavf_ipsec_crypto_inbound_security_policy_add(ad,\n+\t\t\tipsec_flow->spi,\n+\t\t\t1,\n+\t\t\tipsec_flow->ipv4_hdr.dst_addr,\n+\t\t\tNULL,\n+\t\t\t0);\n+\t} else {\n+\t\tipsec_flow->id =\n+\t\t\tiavf_ipsec_crypto_inbound_security_policy_add(ad,\n+\t\t\tipsec_flow->spi,\n+\t\t\t0,\n+\t\t\t0,\n+\t\t\tipsec_flow->ipv6_hdr.dst_addr,\n+\t\t\t0);\n+\t}\n+\n+\tif (ipsec_flow->id < 1) {\n+\t\trte_flow_error_set(error, EINVAL,\n+\t\t\t\tRTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,\n+\t\t\t\t\"Failed to add SA.\");\n+\t\treturn -rte_errno;\n+\t}\n+\n+\tflow->rule = ipsec_flow;\n+\n+\treturn 0;\n+}\n+\n+static int\n+iavf_ipsec_flow_destroy(struct iavf_adapter *ad,\n+\t\tstruct rte_flow *flow,\n+\t\tstruct rte_flow_error *error)\n+{\n+\tstruct iavf_ipsec_flow_item *ipsec_flow = flow->rule;\n+\tif (!ipsec_flow) {\n+\t\trte_flow_error_set(error, EINVAL,\n+\t\t\t\tRTE_FLOW_ERROR_TYPE_HANDLE, NULL,\n+\t\t\t\t\"NULL rule.\");\n+\t\treturn -rte_errno;\n+\t}\n+\n+\tiavf_ipsec_crypto_security_policy_delete(ad,\n+\t\t\tipsec_flow->is_ipv4, ipsec_flow->id);\n+\trte_free(ipsec_flow);\n+\treturn 0;\n+}\n+\n+static struct iavf_flow_engine iavf_ipsec_flow_engine = {\n+\t.init = iavf_ipsec_flow_init,\n+\t.uninit = iavf_ipsec_flow_uninit,\n+\t.create = iavf_ipsec_flow_create,\n+\t.destroy = iavf_ipsec_flow_destroy,\n+\t.type = IAVF_FLOW_ENGINE_IPSEC_CRYPTO,\n+};\n+\n+static int\n+iavf_ipsec_flow_parse(struct iavf_adapter *ad,\n+\t\t       struct iavf_pattern_match_item *array,\n+\t\t       uint32_t array_len,\n+\t\t       const struct rte_flow_item pattern[],\n+\t\t       const struct rte_flow_action actions[],\n+\t\t       void **meta,\n+\t\t       struct rte_flow_error 
*error)\n+{\n+\tstruct iavf_pattern_match_item *item = NULL;\n+\tint ret = -1;\n+\n+\titem = iavf_search_pattern_match_item(pattern, array, array_len, error);\n+\tif (item && item->meta) {\n+\t\tuint32_t type = (uint64_t)(item->meta);\n+\t\tstruct iavf_ipsec_flow_item *fi =\n+\t\t\t\tiavf_ipsec_flow_item_parse(ad->eth_dev,\n+\t\t\t\t\t\tpattern, actions, type);\n+\t\tif (fi && meta) {\n+\t\t\t*meta = fi;\n+\t\t\tret = 0;\n+\t\t}\n+\t}\n+\treturn ret;\n+}\n+\n+static struct iavf_flow_parser iavf_ipsec_flow_parser = {\n+\t.engine = &iavf_ipsec_flow_engine,\n+\t.array = iavf_ipsec_flow_pattern,\n+\t.array_len = RTE_DIM(iavf_ipsec_flow_pattern),\n+\t.parse_pattern_action = iavf_ipsec_flow_parse,\n+\t.stage = IAVF_FLOW_STAGE_IPSEC_CRYPTO,\n+};\n+\n+RTE_INIT(iavf_ipsec_flow_engine_register)\n+{\n+\tiavf_register_flow_engine(&iavf_ipsec_flow_engine);\n+}\n+\ndiff --git a/drivers/net/iavf/iavf_ipsec_crypto.h b/drivers/net/iavf/iavf_ipsec_crypto.h\nnew file mode 100644\nindex 0000000000..d8d7d6649e\n--- /dev/null\n+++ b/drivers/net/iavf/iavf_ipsec_crypto.h\n@@ -0,0 +1,93 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(c) 2020 Intel Corporation\n+ */\n+\n+#ifndef _IAVF_IPSEC_CRYPTO_H_\n+#define _IAVF_IPSEC_CRYPTO_H_\n+\n+#include <rte_security.h>\n+\n+#include \"iavf.h\"\n+\n+/* IPsec Crypto Packet Metadata offload flags */\n+#define IAVF_IPSEC_CRYPTO_OL_FLAGS_IS_TUN\t\t(0x1 << 0)\n+#define IAVF_IPSEC_CRYPTO_OL_FLAGS_ESN\t\t\t(0x1 << 1)\n+#define IAVF_IPSEC_CRYPTO_OL_FLAGS_IPV6_EXT_HDRS\t(0x1 << 2)\n+#define IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT\t\t\t(0x1 << 3)\n+\n+/**\n+ * Packet metadata data structure used to hold parameters required by the iAVF\n+ * transmit data path. Parameters set for session by calling\n+ * rte_security_set_pkt_metadata() API.\n+ */\n+struct iavf_ipsec_crypto_pkt_metadata {\n+\tuint32_t sa_idx;                /* SA hardware index (20b/4B) */\n+\n+\tuint8_t ol_flags;\t\t/* flags (1B) */\n+\tuint8_t len_iv;\t\t\t/* IV length (2b/1B) */\n+\tuint8_t ctx_desc_ipsec_params;\t/* IPsec params for ctx desc (7b/1B) */\n+\tuint8_t esp_trailer_len;\t/* ESP trailer length (6b/1B) */\n+\n+\tuint16_t l4_payload_len;\t/* L4 payload length */\n+\tuint8_t ipv6_ext_hdrs_len;\t/* IPv6 extension headers len (5b/1B) */\n+\tuint8_t next_proto;\t\t/* Next Protocol (8b/1B) */\n+\n+\tuint32_t esn;\t\t        /* Extended Sequence Number (32b/4B) */\n+} __rte_packed;\n+\n+/**\n+ * Inline IPsec Crypto offload is supported\n+ */\n+int\n+iavf_ipsec_crypto_supported(struct iavf_adapter *adapter);\n+\n+/**\n+ * Create security context\n+ */\n+int iavf_security_ctx_create(struct iavf_adapter *adapter);\n+\n+/**\n+ * Initialize security context\n+ */\n+int iavf_security_init(struct iavf_adapter *adapter);\n+\n+/**\n+ * Set security capabilities\n+ */\n+int iavf_ipsec_crypto_set_security_capabililites(struct iavf_security_ctx\n+\t\t*iavf_sctx, struct virtchnl_ipsec_cap *virtchnl_capabilities);\n+\n+\n+int iavf_security_get_pkt_md_offset(struct iavf_adapter *adapter);\n+\n+/**\n+ * Destroy security context\n+ */\n+int iavf_security_ctx_destroy(struct iavf_adapter *adapter);\n+\n+/**\n+ * Verify that the inline IPsec Crypto action is valid for this device\n+ */\n+uint32_t\n+iavf_ipsec_crypto_action_valid(struct rte_eth_dev *ethdev,\n+\tconst struct rte_security_session *session, uint32_t spi);\n+\n+/**\n+ * Add inbound security policy rule to hardware\n+ */\n+int\n+iavf_ipsec_crypto_inbound_security_policy_add(struct iavf_adapter *adapter,\n+\tuint32_t esp_spi,\n+\tuint8_t is_v4,\n+\trte_be32_t 
v4_dst_addr,\n+\tuint8_t *v6_dst_addr,\n+\tuint8_t drop);\n+\n+/**\n+ * Delete inbound security policy rule from hardware\n+ */\n+int\n+iavf_ipsec_crypto_security_policy_delete(struct iavf_adapter *adapter,\n+\tuint8_t is_v4, uint32_t flow_id);\n+\n+#endif /* _IAVF_IPSEC_CRYPTO_H_ */\ndiff --git a/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h b/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h\nnew file mode 100644\nindex 0000000000..70ce8dd638\n--- /dev/null\n+++ b/drivers/net/iavf/iavf_ipsec_crypto_capabilities.h\n@@ -0,0 +1,383 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright(c) 2020 Intel Corporation\n+ */\n+\n+#ifndef _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_\n+#define _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_\n+\n+static const struct rte_cryptodev_capabilities iavf_crypto_capabilities[] = {\n+\t{\t/* SHA1 HMAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,\n+\t\t\t\t.block_size = 64,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 1,\n+\t\t\t\t\t.max = 64,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 20,\n+\t\t\t\t\t.max = 20,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* SHA256 HMAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,\n+\t\t\t\t.block_size = 64,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 1,\n+\t\t\t\t\t.max = 64,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 32,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* SHA384 HMAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,\n+\t\t\t\t.block_size = 128,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 1,\n+\t\t\t\t\t.max = 128,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 48,\n+\t\t\t\t\t.max = 48,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* SHA512 HMAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,\n+\t\t\t\t.block_size = 128,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 1,\n+\t\t\t\t\t.max = 128,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 64,\n+\t\t\t\t\t.max = 64,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* MD5 HMAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_MD5_HMAC,\n+\t\t\t\t.block_size = 64,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 1,\n+\t\t\t\t\t.max = 64,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES XCBC MAC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = 
RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.aad_size = { 0 },\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES GCM */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,\n+\t\t\t{.aead = {\n+\t\t\t\t.algo = RTE_CRYPTO_AEAD_AES_GCM,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 4\n+\t\t\t\t},\n+\t\t\t\t.aad_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 240,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 8,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* ChaCha20-Poly1305 */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,\n+\t\t\t{.aead = {\n+\t\t\t\t.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 32,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 4\n+\t\t\t\t},\n+\t\t\t\t.aad_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 240,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 12,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES CCM */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,\n+\t\t\t{.aead = {\n+\t\t\t\t.algo = RTE_CRYPTO_AEAD_AES_CCM,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 4\n+\t\t\t\t},\n+\t\t\t\t.aad_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 240,\n+\t\t\t\t\t.increment = 1\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 12,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES GMAC (AUTH) */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_AES_GMAC,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 4\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 12,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES CMAC (AUTH) */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_AES_CMAC,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 
4\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 12,\n+\t\t\t\t\t.max = 12,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES CBC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,\n+\t\t\t{.cipher = {\n+\t\t\t\t.algo = RTE_CRYPTO_CIPHER_AES_CBC,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 16,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* AES CTR */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,\n+\t\t\t{.cipher = {\n+\t\t\t\t.algo = RTE_CRYPTO_CIPHER_AES_CTR,\n+\t\t\t\t.block_size = 16,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 16,\n+\t\t\t\t\t.max = 32,\n+\t\t\t\t\t.increment = 8\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 8,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\t/* NULL (AUTH) */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,\n+\t\t\t{.auth = {\n+\t\t\t\t.algo = RTE_CRYPTO_AUTH_NULL,\n+\t\t\t\t.block_size = 1,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 0,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.digest_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 0,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = { 0 }\n+\t\t\t}, },\n+\t\t}, },\n+\t},\n+\t{\t/* NULL (CIPHER) */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,\n+\t\t\t{.cipher = {\n+\t\t\t\t.algo = RTE_CRYPTO_CIPHER_NULL,\n+\t\t\t\t.block_size = 1,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 0,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 0,\n+\t\t\t\t\t.max = 0,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, },\n+\t\t}, }\n+\t},\n+\t{\t/* 3DES CBC */\n+\t\t.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,\n+\t\t{.sym = {\n+\t\t\t.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,\n+\t\t\t{.cipher = {\n+\t\t\t\t.algo = RTE_CRYPTO_CIPHER_3DES_CBC,\n+\t\t\t\t.block_size = 8,\n+\t\t\t\t.key_size = {\n+\t\t\t\t\t.min = 24,\n+\t\t\t\t\t.max = 24,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t},\n+\t\t\t\t.iv_size = {\n+\t\t\t\t\t.min = 8,\n+\t\t\t\t\t.max = 8,\n+\t\t\t\t\t.increment = 0\n+\t\t\t\t}\n+\t\t\t}, }\n+\t\t}, }\n+\t},\n+\t{\n+\t\t.op = RTE_CRYPTO_OP_TYPE_UNDEFINED,\n+\t}\n+};\n+\n+\n+#endif /* _IAVF_IPSEC_CRYPTO_CAPABILITIES_H_ */\ndiff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c\nindex a84a0b07f6..3f8c0822b7 100644\n--- a/drivers/net/iavf/iavf_rxtx.c\n+++ b/drivers/net/iavf/iavf_rxtx.c\n@@ -27,6 +27,7 @@\n \n #include \"iavf.h\"\n #include \"iavf_rxtx.h\"\n+#include \"iavf_ipsec_crypto.h\"\n #include \"rte_pmd_iavf.h\"\n \n /* Offset of mbuf dynamic field for protocol extraction's metadata */\n@@ -39,6 +40,7 @@ uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;\n uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;\n uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;\n uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;\n+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;\n \n uint8_t\n iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)\n@@ -51,6 +53,8 @@ iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)\n \t\t[IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,\n 
\t\t[IAVF_PROTO_XTR_TCP]       = IAVF_RXDID_COMMS_AUX_TCP,\n \t\t[IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,\n+\t\t[IAVF_PROTO_XTR_IPSEC_CRYPTO_SAID] =\n+\t\t\t\tIAVF_RXDID_COMMS_IPSEC_CRYPTO,\n \t};\n \n \treturn flex_type < RTE_DIM(rxdid_map) ?\n@@ -504,6 +508,12 @@ iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)\n \t\trxq->rxd_to_pkt_fields =\n \t\t\tiavf_rxd_to_pkt_fields_by_comms_aux_v2;\n \t\tbreak;\n+\tcase IAVF_RXDID_COMMS_IPSEC_CRYPTO:\n+\t\trxq->xtr_ol_flag =\n+\t\t\trte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;\n+\t\trxq->rxd_to_pkt_fields =\n+\t\t\tiavf_rxd_to_pkt_fields_by_comms_aux_v2;\n+\t\tbreak;\n \tcase IAVF_RXDID_COMMS_OVS_1:\n \t\trxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;\n \t\tbreak;\n@@ -688,6 +698,8 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,\n \t\t       const struct rte_eth_txconf *tx_conf)\n {\n \tstruct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n+\tstruct iavf_adapter *adapter =\n+\t\tIAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);\n \tstruct iavf_info *vf =\n \t\tIAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);\n \tstruct iavf_tx_queue *txq;\n@@ -732,9 +744,9 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,\n \t\treturn -ENOMEM;\n \t}\n \n-\tif (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {\n+\tif (adapter->vf.vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) {\n \t\tstruct virtchnl_vlan_supported_caps *insertion_support =\n-\t\t\t&vf->vlan_v2_caps.offloads.insertion_support;\n+\t\t\t&adapter->vf.vlan_v2_caps.offloads.insertion_support;\n \t\tuint32_t insertion_cap;\n \n \t\tif (insertion_support->outer)\n@@ -758,6 +770,10 @@ iavf_dev_tx_queue_setup(struct rte_eth_dev *dev,\n \ttxq->offloads = offloads;\n \ttxq->tx_deferred_start = tx_conf->tx_deferred_start;\n \n+\tif (iavf_ipsec_crypto_supported(adapter))\n+\t\ttxq->ipsec_crypto_pkt_md_offset =\n+\t\t\tiavf_security_get_pkt_md_offset(adapter);\n+\n \t/* Allocate software ring */\n \ttxq->sw_ring =\n \t\trte_zmalloc_socket(\"iavf tx sw ring\",\n@@ -1075,6 +1091,70 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,\n #endif\n }\n \n+static inline void\n+iavf_flex_rxd_to_ipsec_crypto_said_get(struct rte_mbuf *mb,\n+\t\t\t  volatile union iavf_rx_flex_desc *rxdp)\n+{\n+\tvolatile struct iavf_32b_rx_flex_desc_comms_ipsec *desc =\n+\t\t(volatile struct iavf_32b_rx_flex_desc_comms_ipsec *)rxdp;\n+\n+\tmb->dynfield1[0] = desc->ipsec_said &\n+\t\t\t IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_SAID_MASK;\n+}\n+\n+static inline void\n+iavf_flex_rxd_to_ipsec_crypto_status(struct rte_mbuf *mb,\n+\t\t\t  volatile union iavf_rx_flex_desc *rxdp,\n+\t\t\t  struct iavf_ipsec_crypto_stats *stats)\n+{\n+\tuint16_t status1 = rte_le_to_cpu_16(rxdp->wb.status_error1);\n+\n+\tif (status1 & BIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_PROCESSED)) {\n+\t\tuint16_t ipsec_status;\n+\n+\t\tmb->ol_flags |= PKT_RX_SEC_OFFLOAD;\n+\n+\t\tipsec_status = status1 &\n+\t\t\tIAVF_RX_FLEX_DESC_IPSEC_CRYPTO_STATUS_MASK;\n+\n+\n+\t\tif (unlikely(ipsec_status !=\n+\t\t\tIAVF_IPSEC_CRYPTO_STATUS_SUCCESS)) {\n+\t\t\tmb->ol_flags |= PKT_RX_SEC_OFFLOAD_FAILED;\n+\n+\t\t\tswitch (ipsec_status) {\n+\t\t\tcase IAVF_IPSEC_CRYPTO_STATUS_SAD_MISS:\n+\t\t\t\tstats->ierrors.sad_miss++;\n+\t\t\t\tbreak;\n+\t\t\tcase IAVF_IPSEC_CRYPTO_STATUS_NOT_PROCESSED:\n+\t\t\t\tstats->ierrors.not_processed++;\n+\t\t\t\tbreak;\n+\t\t\tcase IAVF_IPSEC_CRYPTO_STATUS_ICV_CHECK_FAIL:\n+\t\t\t\tstats->ierrors.icv_check++;\n+\t\t\t\tbreak;\n+\t\t\tcase 
IAVF_IPSEC_CRYPTO_STATUS_LENGTH_ERR:\n+\t\t\t\tstats->ierrors.ipsec_length++;\n+\t\t\t\tbreak;\n+\t\t\tcase IAVF_IPSEC_CRYPTO_STATUS_MISC_ERR:\n+\t\t\t\tstats->ierrors.misc++;\n+\t\t\t\tbreak;\n+\t\t\t}\n+\n+\t\t\tstats->ierrors.count++;\n+\t\t\treturn;\n+\t\t}\n+\n+\t\tstats->icount++;\n+\t\tstats->ibytes += rxdp->wb.pkt_len & 0x3FFF;\n+\n+\t\tif (rxdp->wb.rxdid == IAVF_RXDID_COMMS_IPSEC_CRYPTO &&\n+\t\t\tipsec_status !=\n+\t\t\t\tIAVF_IPSEC_CRYPTO_STATUS_SAD_MISS)\n+\t\t\tiavf_flex_rxd_to_ipsec_crypto_said_get(mb, rxdp);\n+\t}\n+}\n+\n+\n /* Translate the rx descriptor status and error fields to pkt flags */\n static inline uint64_t\n iavf_rxd_to_pkt_flags(uint64_t qword)\n@@ -1393,6 +1473,8 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,\n \t\trxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &\n \t\t\trte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];\n \t\tiavf_flex_rxd_to_vlan_tci(rxm, &rxd);\n+\t\tiavf_flex_rxd_to_ipsec_crypto_status(rxm, &rxd,\n+\t\t\t\t&rxq->stats.ipsec_crypto);\n \t\trxq->rxd_to_pkt_fields(rxq, rxm, &rxd);\n \t\tpkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);\n \t\trxm->ol_flags |= pkt_flags;\n@@ -1535,6 +1617,8 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,\n \t\tfirst_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &\n \t\t\trte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];\n \t\tiavf_flex_rxd_to_vlan_tci(first_seg, &rxd);\n+\t\tiavf_flex_rxd_to_ipsec_crypto_status(first_seg, &rxd,\n+\t\t\t\t&rxq->stats.ipsec_crypto);\n \t\trxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);\n \t\tpkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);\n \n@@ -1773,6 +1857,8 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)\n \t\t\tmb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &\n \t\t\t\trte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];\n \t\t\tiavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);\n+\t\t\tiavf_flex_rxd_to_ipsec_crypto_status(mb, &rxdp[j],\n+\t\t\t\t&rxq->stats.ipsec_crypto);\n \t\t\trxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);\n \t\t\tstat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);\n \t\t\tpkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);\n@@ -2085,6 +2171,18 @@ iavf_fill_ctx_desc_cmd_field(volatile uint64_t *field, struct rte_mbuf *m)\n \t*field |= cmd;\n }\n \n+static inline void\n+iavf_fill_ctx_desc_ipsec_field(volatile uint64_t *field,\n+\tstruct iavf_ipsec_crypto_pkt_metadata *ipsec_md)\n+{\n+\tuint64_t ipsec_field =\n+\t\t(uint64_t)ipsec_md->ctx_desc_ipsec_params <<\n+\t\t\tIAVF_TXD_CTX_QW1_IPSEC_PARAMS_CIPHERBLK_SHIFT;\n+\n+\t*field |= ipsec_field;\n+}\n+\n+\n static inline void\n iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0,\n \t\tconst struct rte_mbuf *m)\n@@ -2117,15 +2215,19 @@\n \n static inline uint16_t\n iavf_fill_ctx_desc_segmentation_field(volatile uint64_t *field,\n-\tstruct rte_mbuf *m)\n+\tstruct rte_mbuf *m, struct iavf_ipsec_crypto_pkt_metadata *ipsec_md)\n {\n \tuint64_t segmentation_field = 0;\n \tuint64_t total_length = 0;\n \n-\ttotal_length = m->pkt_len - (m->l2_len + m->l3_len + m->l4_len);\n+\tif (m->ol_flags & PKT_TX_SEC_OFFLOAD) {\n+\t\ttotal_length = ipsec_md->l4_payload_len;\n+\t} else {\n+\t\ttotal_length = m->pkt_len - (m->l2_len + m->l3_len + m->l4_len);\n \n-\tif (m->ol_flags & PKT_TX_TUNNEL_MASK)\n-\t\ttotal_length -= m->outer_l3_len;\n+\t\tif (m->ol_flags & PKT_TX_TUNNEL_MASK)\n+\t\t\ttotal_length -= m->outer_l3_len;\n+\t}\n \n #ifdef RTE_LIBRTE_IAVF_DEBUG_TX\n \tif (!m->l4_len || !m->tso_segsz)\n@@ 
-2148,7 +2250,8 @@ iavf_fill_ctx_desc_segmentation_field(volatile uint64_t *field,\n \n static inline void\n iavf_fill_context_desc(volatile struct iavf_tx_context_desc *desc,\n-\tstruct rte_mbuf *m, uint16_t *tlen)\n+\tstruct rte_mbuf *m, struct iavf_ipsec_crypto_pkt_metadata *ipsec_md,\n+\tuint16_t *tlen)\n {\n \t/* fill descriptor type field */\n \tdesc->qw1 = IAVF_TX_DESC_DTYPE_CONTEXT;\n@@ -2158,8 +2261,12 @@ iavf_fill_context_desc(volatile struct iavf_tx_context_desc *desc,\n \n \t/* fill segmentation field */\n \tif (m->ol_flags & (PKT_TX_TCP_SEG | PKT_TX_UDP_SEG)) {\n+\t\t/* fill IPsec field */\n+\t\tif (m->ol_flags & PKT_TX_SEC_OFFLOAD)\n+\t\t\tiavf_fill_ctx_desc_ipsec_field(&desc->qw1, ipsec_md);\n+\n \t\t*tlen = iavf_fill_ctx_desc_segmentation_field(&desc->qw1,\n-\t\t\t\tm);\n+\t\t\t\tm, ipsec_md);\n \t}\n \n \t/* fill tunnelling field */\n@@ -2173,6 +2280,38 @@ iavf_fill_context_desc(volatile struct iavf_tx_context_desc *desc,\n }\n \n \n+static inline void\n+iavf_fill_ipsec_desc(volatile struct iavf_tx_ipsec_desc *desc,\n+\tconst struct iavf_ipsec_crypto_pkt_metadata *md, uint16_t *ipsec_len)\n+{\n+\tdesc->qw0 = rte_cpu_to_le_64(((uint64_t)md->l4_payload_len <<\n+\t\tIAVF_IPSEC_TX_DESC_QW0_L4PAYLEN_SHIFT) |\n+\t\t((uint64_t)md->esn << IAVF_IPSEC_TX_DESC_QW0_IPSECESN_SHIFT) |\n+\t\t((uint64_t)md->esp_trailer_len <<\n+\t\t\t\tIAVF_IPSEC_TX_DESC_QW0_TRAILERLEN_SHIFT));\n+\n+\tdesc->qw1 = rte_cpu_to_le_64(((uint64_t)md->sa_idx <<\n+\t\tIAVF_IPSEC_TX_DESC_QW1_IPSECSA_SHIFT) |\n+\t\t((uint64_t)md->next_proto <<\n+\t\t\t\tIAVF_IPSEC_TX_DESC_QW1_IPSECNH_SHIFT) |\n+\t\t((uint64_t)(md->len_iv & 0x3) <<\n+\t\t\t\tIAVF_IPSEC_TX_DESC_QW1_IVLEN_SHIFT) |\n+\t\t((uint64_t)(md->ol_flags & IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT ?\n+\t\t\t\t1ULL : 0ULL) <<\n+\t\t\t\tIAVF_IPSEC_TX_DESC_QW1_UDP_SHIFT) |\n+\t\t(uint64_t)IAVF_TX_DESC_DTYPE_IPSEC);\n+\n+\t/**\n+\t * TODO: Pre-calculate this in the Session initialization\n+\t *\n+\t * Calculate IPsec length required in data descriptor func when TSO\n+\t * offload is enabled\n+\t */\n+\t*ipsec_len = sizeof(struct rte_esp_hdr) + (md->len_iv >> 2) +\n+\t\t\t(md->ol_flags & IAVF_IPSEC_CRYPTO_OL_FLAGS_NATT ?\n+\t\t\tsizeof(struct rte_udp_hdr) : 0);\n+}\n+\n static inline void\n iavf_build_data_desc_cmd_offset_fields(volatile uint64_t *qw1,\n \t\tstruct rte_mbuf *m)\n@@ -2286,6 +2425,17 @@ iavf_fill_data_desc(volatile struct iavf_tx_desc *desc,\n }\n \n \n+static struct iavf_ipsec_crypto_pkt_metadata *\n+iavf_ipsec_crypto_get_pkt_metdata(const struct iavf_tx_queue *txq,\n+\t\tstruct rte_mbuf *m)\n+{\n+\tif (m->ol_flags & PKT_TX_SEC_OFFLOAD)\n+\t\treturn RTE_MBUF_DYNFIELD(m, txq->ipsec_crypto_pkt_md_offset,\n+\t\t\t\tstruct iavf_ipsec_crypto_pkt_metadata *);\n+\n+\treturn NULL;\n+}\n+\n /* TX function */\n uint16_t\n iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n@@ -2314,7 +2464,9 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \n \tfor (idx = 0; idx < nb_pkts; idx++) {\n \t\tvolatile struct iavf_tx_desc *ddesc;\n-\t\tuint16_t nb_desc_ctx;\n+\t\tstruct iavf_ipsec_crypto_pkt_metadata *ipsec_md;\n+\n+\t\tuint16_t nb_desc_ctx, nb_desc_ipsec;\n \t\tuint16_t nb_desc_data, nb_desc_required;\n \t\tuint16_t tlen = 0, ipseclen = 0;\n \t\tuint64_t ddesc_template = 0;\n@@ -2324,16 +2476,23 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \n \t\tRTE_MBUF_PREFETCH_TO_FREE(txe->mbuf);\n \n+\t\t/**\n+\t\t * Get metadata for ipsec crypto from mbuf dynamic fields if\n+\t\t * security 
offload is specified.\n+\t\t */\n+\t\tipsec_md = iavf_ipsec_crypto_get_pkt_metdata(txq, mb);\n+\n \t\tnb_desc_data = mb->nb_segs;\n \t\tnb_desc_ctx = !!(mb->ol_flags &\n \t\t\t(PKT_TX_TCP_SEG | PKT_TX_UDP_SEG | PKT_TX_TUNNEL_MASK));\n+\t\tnb_desc_ipsec = !!(mb->ol_flags & PKT_TX_SEC_OFFLOAD);\n \n \t\t/**\n \t\t * The number of descriptors that must be allocated for\n \t\t * a packet equals to the number of the segments of that\n \t\t * packet plus the context and ipsec descriptors if needed.\n \t\t */\n-\t\tnb_desc_required = nb_desc_data + nb_desc_ctx;\n+\t\tnb_desc_required = nb_desc_data + nb_desc_ctx + nb_desc_ipsec;\n \n \t\tdesc_idx_last = (uint16_t)(desc_idx + nb_desc_required - 1);\n \n@@ -2384,7 +2543,7 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \t\t\t\ttxe->mbuf = NULL;\n \t\t\t}\n \n-\t\t\tiavf_fill_context_desc(ctx_desc, mb, &tlen);\n+\t\t\tiavf_fill_context_desc(ctx_desc, mb, ipsec_md, &tlen);\n \t\t\tIAVF_DUMP_TX_DESC(txq, ctx_desc, desc_idx);\n \n \t\t\ttxe->last_id = desc_idx_last;\n@@ -2392,8 +2551,28 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n \t\t\ttxe = txn;\n \t\t\t}\n \n+\t\tif (nb_desc_ipsec) {\n+\t\t\tvolatile struct iavf_tx_ipsec_desc *ipsec_desc =\n+\t\t\t\t(volatile struct iavf_tx_ipsec_desc *)\n+\t\t\t\t\t&txr[desc_idx];\n+\n+\t\t\ttxn = &txe_ring[txe->next_id];\n+\t\t\tRTE_MBUF_PREFETCH_TO_FREE(txn->mbuf);\n+\n+\t\t\tif (txe->mbuf) {\n+\t\t\t\trte_pktmbuf_free_seg(txe->mbuf);\n+\t\t\t\ttxe->mbuf = NULL;\n+\t\t\t}\n+\n+\t\t\tiavf_fill_ipsec_desc(ipsec_desc, ipsec_md, &ipseclen);\n+\n+\t\t\tIAVF_DUMP_TX_DESC(txq, ipsec_desc, desc_idx);\n+\n+\t\t\ttxe->last_id = desc_idx_last;\n+\t\t\tdesc_idx = txe->next_id;\n+\t\t\ttxe = txn;\n+\t\t}\n \n-\t\t\n \t\tmb_seg = mb;\n \n \t\tdo {\ndiff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h\nindex 1bc47614ea..e009387aff 100644\n--- a/drivers/net/iavf/iavf_rxtx.h\n+++ b/drivers/net/iavf/iavf_rxtx.h\n@@ -25,7 +25,8 @@\n \n #define IAVF_TX_NO_VECTOR_FLAGS (\t\t\t\t \\\n \t\tDEV_TX_OFFLOAD_MULTI_SEGS |\t\t \\\n-\t\tDEV_TX_OFFLOAD_TCP_TSO)\n+\t\tDEV_TX_OFFLOAD_TCP_TSO |\t\t \\\n+\t\tDEV_TX_OFFLOAD_SECURITY)\n \n #define IAVF_TX_VECTOR_OFFLOAD (\t\t\t\t \\\n \t\tDEV_TX_OFFLOAD_VLAN_INSERT |\t\t \\\n@@ -47,7 +48,7 @@\n #define DEFAULT_TX_RS_THRESH     32\n #define DEFAULT_TX_FREE_THRESH   32\n \n-#define IAVF_MIN_TSO_MSS          88\n+#define IAVF_MIN_TSO_MSS          256\n #define IAVF_MAX_TSO_MSS          9668\n #define IAVF_TSO_MAX_SEG          UINT8_MAX\n #define IAVF_TX_MAX_MTU_SEG       8\n@@ -65,7 +66,8 @@\n \t\tPKT_TX_VLAN_PKT |\t\t \\\n \t\tPKT_TX_IP_CKSUM |\t\t \\\n \t\tPKT_TX_L4_MASK |\t\t \\\n-\t\tPKT_TX_TCP_SEG)\n+\t\tPKT_TX_TCP_SEG |\t\t \\\n+\t\tPKT_TX_SEC_OFFLOAD)\n \n #define IAVF_TX_OFFLOAD_NOTSUP_MASK \\\n \t\t(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)\n@@ -163,6 +165,24 @@ struct iavf_txq_ops {\n \tvoid (*release_mbufs)(struct iavf_tx_queue *txq);\n };\n \n+struct iavf_ipsec_crypto_stats {\n+\tuint64_t icount;\n+\tuint64_t ibytes;\n+\tstruct {\n+\t\tuint64_t count;\n+\t\tuint64_t sad_miss;\n+\t\tuint64_t not_processed;\n+\t\tuint64_t icv_check;\n+\t\tuint64_t ipsec_length;\n+\t\tuint64_t misc;\n+\t} ierrors;\n+};\n+\n+struct iavf_rx_queue_stats {\n+\tuint64_t reserved;\n+\tstruct iavf_ipsec_crypto_stats ipsec_crypto;\n+};\n+\n /* Structure associated with each Rx queue. 
*/\n struct iavf_rx_queue {\n \tstruct rte_mempool *mp;       /* mbuf pool to populate Rx ring */\n@@ -211,6 +231,7 @@ struct iavf_rx_queue {\n \t\t/* flexible descriptor metadata extraction offload flag */\n \tiavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;\n \t\t\t\t/* handle flexible descriptor by RXDID */\n+\tstruct iavf_rx_queue_stats stats;\n \tuint64_t offloads;\n };\n \n@@ -245,6 +266,7 @@ struct iavf_tx_queue {\n \tuint64_t offloads;\n \tuint16_t next_dd;              /* next to set RS, for VPMD */\n \tuint16_t next_rs;              /* next to check DD,  for VPMD */\n+\tuint16_t ipsec_crypto_pkt_md_offset;\n \n \tbool q_set;                    /* if rx queue has been configured */\n \tbool tx_deferred_start;        /* don't start this queue in dev start */\n@@ -347,6 +369,40 @@ struct iavf_32b_rx_flex_desc_comms_ovs {\n \t} flex_ts;\n };\n \n+/* Rx Flex Descriptor\n+ * RxDID Profile ID 24 Inline IPsec\n+ * Flex-field 0: RSS hash lower 16-bits\n+ * Flex-field 1: RSS hash upper 16-bits\n+ * Flex-field 2: Flow ID lower 16-bits\n+ * Flex-field 3: Flow ID upper 16-bits\n+ * Flex-field 4: Inline IPsec SAID lower 16-bits\n+ * Flex-field 5: Inline IPsec SAID upper 16-bits\n+ */\n+struct iavf_32b_rx_flex_desc_comms_ipsec {\n+\t/* Qword 0 */\n+\tu8 rxdid;\n+\tu8 mir_id_umb_cast;\n+\t__le16 ptype_flexi_flags0;\n+\t__le16 pkt_len;\n+\t__le16 hdr_len_sph_flex_flags1;\n+\n+\t/* Qword 1 */\n+\t__le16 status_error0;\n+\t__le16 l2tag1;\n+\t__le32 rss_hash;\n+\n+\t/* Qword 2 */\n+\t__le16 status_error1;\n+\tu8 flexi_flags2;\n+\tu8 ts_low;\n+\t__le16 l2tag2_1st;\n+\t__le16 l2tag2_2nd;\n+\n+\t/* Qword 3 */\n+\t__le32 flow_id;\n+\t__le32 ipsec_said;\n+};\n+\n /* Receive Flex Descriptor profile IDs: There are a total\n  * of 64 profiles where profile IDs 0/1 are for legacy; and\n  * profiles 2-63 are flex profiles that can be programmed\n@@ -366,6 +422,7 @@ enum iavf_rxdid {\n \tIAVF_RXDID_COMMS_AUX_TCP\t= 21,\n \tIAVF_RXDID_COMMS_OVS_1\t\t= 22,\n \tIAVF_RXDID_COMMS_OVS_2\t\t= 23,\n+\tIAVF_RXDID_COMMS_IPSEC_CRYPTO\t= 24,\n \tIAVF_RXDID_COMMS_AUX_IP_OFFSET\t= 25,\n \tIAVF_RXDID_LAST\t\t\t= 63,\n };\n@@ -393,9 +450,13 @@ enum iavf_rx_flex_desc_status_error_0_bits {\n \n enum iavf_rx_flex_desc_status_error_1_bits {\n \t/* Note: These are predefined bit offsets */\n-\tIAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */\n-\tIAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,\n-\tIAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,\n+\t/* Bits 3:0 are reserved for inline ipsec status */\n+\tIAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_0 = 0,\n+\tIAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_1,\n+\tIAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_2,\n+\tIAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_3,\n+\tIAVF_RX_FLEX_DESC_STATUS1_NAT_S,\n+\tIAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_PROCESSED,\n \t/* [10:6] reserved */\n \tIAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,\n \tIAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,\n@@ -405,6 +466,24 @@ enum iavf_rx_flex_desc_status_error_1_bits {\n \tIAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! 
*/\n };\n \n+#define IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_STATUS_MASK  (\t\t\\\n+\tBIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_0) |\t\\\n+\tBIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_1) |\t\\\n+\tBIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_2) |\t\\\n+\tBIT(IAVF_RX_FLEX_DESC_STATUS1_IPSEC_CRYPTO_STATUS_3))\n+\n+enum iavf_rx_flex_desc_ipsec_crypto_status {\n+\tIAVF_IPSEC_CRYPTO_STATUS_SUCCESS = 0,\n+\tIAVF_IPSEC_CRYPTO_STATUS_SAD_MISS,\n+\tIAVF_IPSEC_CRYPTO_STATUS_NOT_PROCESSED,\n+\tIAVF_IPSEC_CRYPTO_STATUS_ICV_CHECK_FAIL,\n+\tIAVF_IPSEC_CRYPTO_STATUS_LENGTH_ERR,\n+\t/* Reserved */\n+\tIAVF_IPSEC_CRYPTO_STATUS_MISC_ERR = 0xF\n+};\n+\n+#define IAVF_RX_FLEX_DESC_IPSEC_CRYPTO_SAID_MASK\t(0xFFFFF)\n+\n /* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */\n #define IAVF_RX_FLEX_DESC_PTYPE_M\t(0x3FF) /* 10-bits */\n \n@@ -565,6 +644,9 @@ void iavf_dump_tx_descriptor(const struct iavf_tx_queue *txq,\n \tcase IAVF_TX_DESC_DTYPE_CONTEXT:\n \t\tname = \"Tx_context_desc\";\n \t\tbreak;\n+\tcase IAVF_TX_DESC_DTYPE_IPSEC:\n+\t\tname = \"Tx_IPsec_desc\";\n+\t\tbreak;\n \tdefault:\n \t\tname = \"unknown_desc\";\n \t\tbreak;\ndiff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c\nindex 5c62443999..d99b03c8b2 100644\n--- a/drivers/net/iavf/iavf_vchnl.c\n+++ b/drivers/net/iavf/iavf_vchnl.c\n@@ -1767,3 +1767,32 @@ iavf_get_max_rss_queue_region(struct iavf_adapter *adapter)\n \treturn 0;\n }\n \n+\n+\n+int\n+iavf_ipsec_crypto_request(struct iavf_adapter *adapter,\n+\t\tuint8_t *msg, size_t msg_len,\n+\t\tuint8_t *resp_msg, size_t resp_msg_len)\n+{\n+\tstruct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(adapter);\n+\tstruct iavf_cmd_info args;\n+\tint err;\n+\n+\targs.ops = VIRTCHNL_OP_INLINE_IPSEC_CRYPTO;\n+\targs.in_args = msg;\n+\targs.in_args_size = msg_len;\n+\targs.out_buffer = vf->aq_resp;\n+\targs.out_size = IAVF_AQ_BUF_SZ;\n+\n+\terr = iavf_execute_vf_cmd(adapter, &args, 1);\n+\tif (err) {\n+\t\tPMD_DRV_LOG(ERR, \"fail to execute command %s\",\n+\t\t\t\t\"OP_INLINE_IPSEC_CRYPTO\");\n+\t\treturn err;\n+\t}\n+\n+\tmemcpy(resp_msg, args.out_buffer, resp_msg_len);\n+\n+\treturn 0;\n+}\n+\ndiff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build\nindex f2010a8337..385770b043 100644\n--- a/drivers/net/iavf/meson.build\n+++ b/drivers/net/iavf/meson.build\n@@ -10,7 +10,7 @@ endif\n cflags += ['-Wno-strict-aliasing']\n \n includes += include_directories('../../common/iavf')\n-deps += ['common_iavf']\n+deps += ['common_iavf', 'security', 'cryptodev']\n \n sources = files(\n         'iavf_ethdev.c',\n@@ -20,6 +20,7 @@ sources = files(\n         'iavf_fdir.c',\n         'iavf_hash.c',\n         'iavf_tm.c',\n+        'iavf_ipsec_crypto.c',\n )\n \n if arch_subdir == 'x86'\ndiff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h\nindex 3a045040f1..7426eb9be3 100644\n--- a/drivers/net/iavf/rte_pmd_iavf.h\n+++ b/drivers/net/iavf/rte_pmd_iavf.h\n@@ -92,6 +92,7 @@ extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;\n extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;\n extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;\n extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;\n+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;\n \n /**\n  * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.\ndiff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map\nindex f3efe756cf..97f0f87311 100644\n--- a/drivers/net/iavf/version.map\n+++ b/drivers/net/iavf/version.map\n@@ 
-13,4 +13,7 @@ EXPERIMENTAL {\n \trte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;\n \trte_pmd_ifd_dynflag_proto_xtr_tcp_mask;\n \trte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;\n+\n+\t# added in 21.11\n+\trte_pmd_ifd_dynflag_proto_xtr_ipsec_crypto_said_mask;\n };\n",
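The capabilities-get path in the patch sends a virtchnl message and only trusts the reply after checking that the PF echoed both the opcode and the caller-chosen request id. A minimal self-contained sketch of that validation contract (the struct and function names here are hypothetical stand-ins, not the driver's):

```c
#include <stdint.h>
#include <errno.h>

/* Hypothetical stand-in for the virtchnl IPsec message header. */
struct vc_ipsec_msg {
	uint16_t ipsec_opcode;
	uint16_t req_id;
};

/* Accept a reply only if it echoes the request's opcode and id,
 * mirroring the -EFAULT check in the GET_CAP path above. */
static int
vc_reply_matches(const struct vc_ipsec_msg *req,
		 const struct vc_ipsec_msg *resp)
{
	if (resp->ipsec_opcode != req->ipsec_opcode ||
	    resp->req_id != req->req_id)
		return -EFAULT;
	return 0;
}
```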
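auth_maptbl, cipher_maptbl and aead_maptbl index straight into designated-initializer arrays with the PF-supplied algo_type, which is safe only while the PF is trusted. A hedged sketch of the same translation with an explicit bounds check (the enum names below are illustrative, not the virtchnl ones):

```c
/* Illustrative device-side and API-side algorithm ids. */
enum hw_auth_algo { HW_AUTH_NO_ALG = 0, HW_AUTH_SHA1_HMAC, HW_AUTH_NB };
enum sw_auth_algo { SW_AUTH_NULL = 0, SW_AUTH_SHA1_HMAC };

/* Designated initializers leave unnamed slots zero-filled. */
static const enum sw_auth_algo auth_map[HW_AUTH_NB] = {
	[HW_AUTH_NO_ALG]    = SW_AUTH_NULL,
	[HW_AUTH_SHA1_HMAC] = SW_AUTH_SHA1_HMAC,
};

/* Translate a device algorithm id, rejecting anything outside the
 * table instead of indexing blindly. */
static int
auth_map_lookup(unsigned int hw_algo, enum sw_auth_algo *out)
{
	if (hw_algo >= HW_AUTH_NB)
		return -1;
	*out = auth_map[hw_algo];
	return 0;
}
```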
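iavf_ipsec_crypto_set_security_capabililites() terminates the dynamically built array with an entry whose .op is RTE_CRYPTO_OP_TYPE_UNDEFINED rather than storing a count, so consumers walk it the way they would a NUL-terminated string. A short sketch of that traversal:

```c
#include <rte_cryptodev.h>

/* Count entries in a capability array terminated by an element whose
 * .op is RTE_CRYPTO_OP_TYPE_UNDEFINED. */
static unsigned int
crypto_caps_count(const struct rte_cryptodev_capabilities *caps)
{
	unsigned int n = 0;

	while (caps[n].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
		n++;
	return n;
}
```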
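has_security_action() accepts exactly the two-element action list {SECURITY; END}, with the security session carried in actions[0].conf. A sketch, from the application side, of building a flow action array that satisfies that contract (the session pointer is assumed to come from an earlier rte_security session setup):

```c
#include <rte_flow.h>

/* Build the only action layout the parser above accepts:
 * a SECURITY action carrying the session, then END. */
static void
build_inline_ipsec_actions(struct rte_flow_action actions[2],
			   void *security_session)
{
	actions[0] = (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_SECURITY,
		.conf = security_session,
	};
	actions[1] = (struct rte_flow_action){
		.type = RTE_FLOW_ACTION_TYPE_END,
	};
}
```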
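Both halves of the datapath meet through the mbuf dynamic field registered in iavf_security_init(): the session code writes per-packet IPsec metadata at pkt_md_offset and the Tx path reads it back through RTE_MBUF_DYNFIELD. A self-contained sketch of that register-then-access pattern (the field name and payload struct are illustrative, not the driver's):

```c
#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

struct example_pkt_md {
	uint32_t sa_idx;	/* illustrative payload */
};

static int example_md_offset = -1;

static int
example_md_register(void)
{
	static const struct rte_mbuf_dynfield fld = {
		.name = "example_ipsec_pkt_metadata",
		.size = sizeof(struct example_pkt_md),
		.align = __alignof__(struct example_pkt_md),
	};

	/* Returns the byte offset inside rte_mbuf, or a negative errno. */
	example_md_offset = rte_mbuf_dynfield_register(&fld);
	return example_md_offset < 0 ? example_md_offset : 0;
}

static struct example_pkt_md *
example_md_get(struct rte_mbuf *m)
{
	/* Resolve the registered offset into a typed pointer. */
	return RTE_MBUF_DYNFIELD(m, example_md_offset,
				 struct example_pkt_md *);
}
```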
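iavf_fill_ipsec_desc() derives the per-packet inline-IPsec overhead as ESP header plus IV plus an optional UDP header for NAT-T encapsulation; for a NAT-T SA with an 8-byte IV that is 8 + 8 + 8 = 24 bytes. A sketch of the same arithmetic with the IV length already in bytes (the driver packs the IV length differently inside len_iv):

```c
#include <stdint.h>
#include <rte_esp.h>
#include <rte_udp.h>

/* Per-packet inline-IPsec overhead: ESP header (8B), the cipher IV,
 * and, for UDP-encapsulated (NAT-T) SAs, a UDP header (8B). */
static uint16_t
esp_tx_overhead(uint8_t iv_len_bytes, int udp_encap)
{
	return (uint16_t)(sizeof(struct rte_esp_hdr) + iv_len_bytes +
			  (udp_encap ? sizeof(struct rte_udp_hdr) : 0));
}
```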
    "prefixes": [
        "v3",
        "4/6"
    ]
}