From patchwork Tue Apr 18 09:23:21 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 126229
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
Subject: [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
Date: Tue, 18 Apr 2023 12:23:21 +0300
Message-ID: <20230418092325.2578712-2-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230418092325.2578712-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

AES-GCM provides both authenticated encryption and the ability to check
the integrity and authenticity of additional authenticated data (AAD)
that is sent in the clear.

This commit adds the AES-GCM capability query and check. A new devarg
"algo" is added to select whether the crypto PMD is initialized for
AES-GCM (algo=1) or AES-XTS (algo=0, the default).

Signed-off-by: Suanming Mou
---
 doc/guides/nics/mlx5.rst              |   8 +++
 drivers/common/mlx5/mlx5_devx_cmds.c  |  17 +++++
 drivers/common/mlx5/mlx5_devx_cmds.h  |  14 ++++
 drivers/common/mlx5/mlx5_prm.h        |  19 ++++-
 drivers/crypto/mlx5/meson.build       |   1 +
 drivers/crypto/mlx5/mlx5_crypto.c     |  30 +++++++-
 drivers/crypto/mlx5/mlx5_crypto.h     |   5 ++
 drivers/crypto/mlx5/mlx5_crypto_gcm.c | 100 ++++++++++++++++++++++++++
 8 files changed, 189 insertions(+), 5 deletions(-)
 create mode 100644 drivers/crypto/mlx5/mlx5_crypto_gcm.c

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9d111ed436..5eb2150613 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1270,6 +1270,14 @@ for an additional list of options shared with other mlx5 drivers.
 
   Set to zero by default.
 
+- ``algo`` parameter [int]
+
+  - 0. AES-XTS crypto.
+
+  - 1. AES-GCM crypto.
+
+  Set to zero (AES-XTS) by default.
+
 Supported NICs
 --------------

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 96d3e3e373..592a7cffdb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1197,6 +1197,23 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		attr->crypto_wrapped_import_method = !!(MLX5_GET(crypto_caps,
 						hcattr, wrapped_import_method)
 						& 1 << 2);
+		attr->sw_wrapped_dek = MLX5_GET(crypto_caps, hcattr, sw_wrapped_dek_key_purpose) ?
+				       MLX5_GET(crypto_caps, hcattr, sw_wrapped_dek_new) : 0;
+		attr->crypto_mmo.crypto_mmo_qp = MLX5_GET(crypto_caps, hcattr, crypto_mmo_qp);
+		attr->crypto_mmo.gcm_256_encrypt =
+			MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_encrypt);
+		attr->crypto_mmo.gcm_128_encrypt =
+			MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_encrypt);
+		attr->crypto_mmo.gcm_256_decrypt =
+			MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_decrypt);
+		attr->crypto_mmo.gcm_128_decrypt =
+			MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_decrypt);
+		attr->crypto_mmo.gcm_auth_tag_128 =
+			MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_128);
+		attr->crypto_mmo.gcm_auth_tag_96 =
+			MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_96);
+		attr->crypto_mmo.log_crypto_mmo_max_size =
+			MLX5_GET(crypto_caps, hcattr, log_crypto_mmo_max_size);
 	}
 	if (hca_cap_2_sup) {
 		hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index e006a04d68..d640482346 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -153,6 +153,18 @@ struct mlx5_hca_ipsec_attr {
 	struct mlx5_hca_ipsec_reformat_attr reformat_fdb;
 };
 
+__extension__
+struct mlx5_hca_crypto_mmo_attr {
+	uint32_t crypto_mmo_qp:1;
+	uint32_t gcm_256_encrypt:1;
+	uint32_t gcm_128_encrypt:1;
+	uint32_t gcm_256_decrypt:1;
+	uint32_t gcm_128_decrypt:1;
+	uint32_t gcm_auth_tag_128:1;
+	uint32_t gcm_auth_tag_96:1;
+	uint32_t log_crypto_mmo_max_size:6;
+};
+
 /* ISO C restricts enumerator values to range of 'int' */
 __extension__
 enum {
@@ -266,6 +278,7 @@ struct mlx5_hca_attr {
 	uint32_t import_kek:1; /* General obj type IMPORT_KEK supported. */
 	uint32_t credential:1; /* General obj type CREDENTIAL supported. */
 	uint32_t crypto_login:1; /* General obj type CRYPTO_LOGIN supported. */
+	uint32_t sw_wrapped_dek:16; /* DEKs wrapped by SW are supported. */
 	uint32_t regexp_num_of_engines;
 	uint32_t log_max_ft_sampler_num:8;
 	uint32_t inner_ipv4_ihl:1;
@@ -281,6 +294,7 @@ struct mlx5_hca_attr {
 	struct mlx5_hca_flow_attr flow;
 	struct mlx5_hca_flex_attr flex;
 	struct mlx5_hca_ipsec_attr ipsec;
+	struct mlx5_hca_crypto_mmo_attr crypto_mmo;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 31db082c50..a3b85f514e 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -4654,7 +4654,9 @@ struct mlx5_ifc_crypto_caps_bits {
 	u8 synchronize_dek[0x1];
 	u8 int_kek_manual[0x1];
 	u8 int_kek_auto[0x1];
-	u8 reserved_at_6[0x12];
+	u8 reserved_at_6[0xd];
+	u8 sw_wrapped_dek_key_purpose[0x1];
+	u8 reserved_at_14[0x4];
 	u8 wrapped_import_method[0x8];
 	u8 reserved_at_20[0x3];
 	u8 log_dek_max_alloc[0x5];
@@ -4671,8 +4673,19 @@ struct mlx5_ifc_crypto_caps_bits {
 	u8 log_dek_granularity[0x5];
 	u8 reserved_at_68[0x3];
 	u8 log_max_num_int_kek[0x5];
-	u8 reserved_at_70[0x10];
-	u8 reserved_at_80[0x780];
+	u8 sw_wrapped_dek_new[0x10];
+	u8 reserved_at_80[0x80];
+	u8 crypto_mmo_qp[0x1];
+	u8 crypto_aes_gcm_256_encrypt[0x1];
+	u8 crypto_aes_gcm_128_encrypt[0x1];
+	u8 crypto_aes_gcm_256_decrypt[0x1];
+	u8 crypto_aes_gcm_128_decrypt[0x1];
+	u8 gcm_auth_tag_128[0x1];
+	u8 gcm_auth_tag_96[0x1];
+	u8 reserved_at_107[0x3];
+	u8 log_crypto_mmo_max_size[0x6];
+	u8 reserved_at_110[0x10];
+	u8 reserved_at_120[0x6d0];
 };
 
 struct mlx5_ifc_crypto_commissioning_register_bits {

diff --git a/drivers/crypto/mlx5/meson.build b/drivers/crypto/mlx5/meson.build
index a830a4c7b9..930a31c795 100644
--- a/drivers/crypto/mlx5/meson.build
+++ b/drivers/crypto/mlx5/meson.build
@@ -15,6 +15,7 @@ endif
 
 sources = files(
         'mlx5_crypto.c',
+        'mlx5_crypto_gcm.c',
         'mlx5_crypto_dek.c',
 )

diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 9dec1cfbe0..6963d8a9c9 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -23,6 +23,13 @@
 #define MLX5_CRYPTO_MAX_QPS 128
 #define MLX5_CRYPTO_MAX_SEGS 56
 
+enum mlx5_crypto_pmd_support_algo {
+	MLX5_CRYPTO_PMD_SUPPORT_ALGO_NULL,
+	MLX5_CRYPTO_PMD_SUPPORT_ALGO_AES_XTS,
+	MLX5_CRYPTO_PMD_SUPPORT_ALGO_AES_GCM,
+	MLX5_CRYPTO_PMD_SUPPORT_ALGO_MAX,
+};
+
 #define MLX5_CRYPTO_FEATURE_FLAGS(wrapped_mode) \
 	(RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | \
 	 RTE_CRYPTODEV_FF_IN_PLACE_SGL | RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | \
@@ -102,7 +109,7 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
 		dev_info->driver_id = mlx5_crypto_driver_id;
 		dev_info->feature_flags =
 			MLX5_CRYPTO_FEATURE_FLAGS(priv->is_wrapped_mode);
-		dev_info->capabilities = mlx5_crypto_caps;
+		dev_info->capabilities = priv->caps;
 		dev_info->max_nb_queue_pairs = MLX5_CRYPTO_MAX_QPS;
 		dev_info->min_mbuf_headroom_req = 0;
 		dev_info->min_mbuf_tailroom_req = 0;
@@ -749,6 +756,14 @@ mlx5_crypto_args_check_handler(const char *key, const char *val, void *opaque)
 		attr->credential_pointer = (uint32_t)tmp;
 	} else if (strcmp(key, "keytag") == 0) {
 		devarg_prms->keytag = tmp;
+	} else if (strcmp(key, "algo") == 0) {
+		if (tmp == 1) {
+			devarg_prms->is_aes_gcm = 1;
+		} else if (tmp > 1) {
+			DRV_LOG(ERR, "Invalid algo.");
+			rte_errno = EINVAL;
+			return -rte_errno;
+		}
 	}
 	return 0;
 }
@@ -765,6 +780,7 @@ mlx5_crypto_parse_devargs(struct mlx5_kvargs_ctrl *mkvlist,
 		"keytag",
 		"max_segs_num",
 		"wcs_file",
+		"algo",
 		NULL,
 	};
 
@@ -895,7 +911,9 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 		rte_errno = ENOTSUP;
 		return -rte_errno;
 	}
-	if (!cdev->config.hca_attr.crypto || !cdev->config.hca_attr.aes_xts) {
+	if (!cdev->config.hca_attr.crypto ||
+	    (!cdev->config.hca_attr.aes_xts &&
+	     !cdev->config.hca_attr.crypto_mmo.crypto_mmo_qp)) {
 		DRV_LOG(ERR, "Not enough capabilities to support crypto "
 			"operations, maybe old FW/OFED version?");
 		rte_errno = ENOTSUP;
@@ -924,6 +942,14 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
 	priv->cdev = cdev;
 	priv->crypto_dev = crypto_dev;
 	priv->is_wrapped_mode = wrapped_mode;
+	priv->caps = mlx5_crypto_caps;
+	/* Init and override AES-GCM configuration. */
+	if (devarg_prms.is_aes_gcm) {
+		ret = mlx5_crypto_gcm_init(priv);
+		if (ret) {
+			DRV_LOG(ERR, "Failed to init AES-GCM crypto.");
+		}
+	}
 	if (mlx5_devx_uar_prepare(cdev, &priv->uar) != 0) {
 		rte_cryptodev_pmd_destroy(priv->crypto_dev);
 		return -1;

diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index a2771b3dab..80c2cab0dd 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -31,6 +31,7 @@ struct mlx5_crypto_priv {
 	struct mlx5_uar uar; /* User Access Region. */
 	uint32_t max_segs_num; /* Maximum supported data segs. */
 	struct mlx5_hlist *dek_hlist; /* Dek hash list. */
+	const struct rte_cryptodev_capabilities *caps;
 	struct rte_cryptodev_config dev_config;
 	struct mlx5_devx_obj *login_obj;
 	uint64_t keytag;
@@ -68,6 +69,7 @@ struct mlx5_crypto_devarg_params {
 	struct mlx5_devx_crypto_login_attr login_attr;
 	uint64_t keytag;
 	uint32_t max_segs_num;
+	uint32_t is_aes_gcm:1;
 };
 
 int
@@ -84,4 +86,7 @@ mlx5_crypto_dek_setup(struct mlx5_crypto_priv *priv);
 void
 mlx5_crypto_dek_unset(struct mlx5_crypto_priv *priv);
 
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv);
+
 #endif /* MLX5_CRYPTO_H_ */

diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
new file mode 100644
index 0000000000..d60ac379cf
--- /dev/null
+++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_eal_paging.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <bus_pci_driver.h>
+#include <rte_memory.h>
+
+#include <mlx5_glue.h>
+#include <mlx5_common.h>
+#include <mlx5_devx_cmds.h>
+#include <mlx5_common_os.h>
+
+#include "mlx5_crypto_utils.h"
+#include "mlx5_crypto.h"
+
+static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = {
+	{
+		.op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+	},
+	{
+		.op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+	}
+};
+
+static int
+mlx5_crypto_generate_gcm_cap(struct mlx5_hca_crypto_mmo_attr *mmo_attr,
+			     struct rte_cryptodev_capabilities *cap)
+{
+	/* Init key size. */
+	if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt &&
+	    mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+		cap->sym.aead.key_size.min = 16;
+		cap->sym.aead.key_size.max = 32;
+		cap->sym.aead.key_size.increment = 16;
+	} else if (mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+		cap->sym.aead.key_size.min = 32;
+		cap->sym.aead.key_size.max = 32;
+		cap->sym.aead.key_size.increment = 0;
+	} else if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt) {
+		cap->sym.aead.key_size.min = 16;
+		cap->sym.aead.key_size.max = 16;
+		cap->sym.aead.key_size.increment = 0;
+	} else {
+		DRV_LOG(ERR, "No available AES-GCM encryption/decryption supported.");
+		return -1;
+	}
+	/* Init tag size. */
+	if (mmo_attr->gcm_auth_tag_96 && mmo_attr->gcm_auth_tag_128) {
+		cap->sym.aead.digest_size.min = 8;
+		cap->sym.aead.digest_size.max = 16;
+		cap->sym.aead.digest_size.increment = 8;
+	} else if (mmo_attr->gcm_auth_tag_96) {
+		cap->sym.aead.digest_size.min = 8;
+		cap->sym.aead.digest_size.max = 8;
+		cap->sym.aead.digest_size.increment = 0;
+	} else if (mmo_attr->gcm_auth_tag_128) {
+		cap->sym.aead.digest_size.min = 16;
+		cap->sym.aead.digest_size.max = 16;
+		cap->sym.aead.digest_size.increment = 0;
+	} else {
+		DRV_LOG(ERR, "No available AES-GCM tag size supported.");
+		return -1;
+	}
+	/* Init AAD size. */
+	cap->sym.aead.aad_size.min = 0;
+	cap->sym.aead.aad_size.max = UINT16_MAX;
+	cap->sym.aead.aad_size.increment = 1;
+	/* Init IV size. */
+	cap->sym.aead.iv_size.min = 12;
+	cap->sym.aead.iv_size.max = 12;
+	cap->sym.aead.iv_size.increment = 0;
+	/* Init left items. */
+	cap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+	cap->sym.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD;
+	cap->sym.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
+	return 0;
+}
+
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
+{
+	struct mlx5_common_device *cdev = priv->cdev;
+	int ret;
+
+	/* Generate GCM capability. */
+	ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo,
+					   mlx5_crypto_gcm_caps);
+	if (ret) {
+		DRV_LOG(ERR, "Not enough AES-GCM capabilities.");
+		return -1;
+	}
+	priv->caps = mlx5_crypto_gcm_caps;
+	return 0;
+}
From patchwork Tue Apr 18 09:23:22 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 126228
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
Subject: [RFC PATCH 2/5] crypto/mlx5: add AES-GCM encryption key
Date: Tue, 18 Apr 2023 12:23:22 +0300
Message-ID: <20230418092325.2578712-3-suanmingm@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230418092325.2578712-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

The crypto device requires a DEK (data encryption key) object for data
encryption/decryption operations. This commit adds the AES-GCM DEK
object management support.

Signed-off-by: Suanming Mou
---
 drivers/common/mlx5/mlx5_devx_cmds.c  |   6 +-
 drivers/common/mlx5/mlx5_devx_cmds.h  |   1 +
 drivers/common/mlx5/mlx5_prm.h        |   6 +-
 drivers/crypto/mlx5/mlx5_crypto.c     |   2 +-
 drivers/crypto/mlx5/mlx5_crypto.h     |   3 +-
 drivers/crypto/mlx5/mlx5_crypto_dek.c | 157 ++++++++++++++++++++------
 drivers/crypto/mlx5/mlx5_crypto_gcm.c |   2 +
 7 files changed, 137 insertions(+), 40 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 592a7cffdb..8b51a75cc8 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -3166,10 +3166,14 @@ mlx5_devx_cmd_create_dek_obj(void *ctx, struct mlx5_devx_dek_attr *attr)
 	ptr = MLX5_ADDR_OF(create_dek_in, in, dek);
 	MLX5_SET(dek, ptr, key_size, attr->key_size);
 	MLX5_SET(dek, ptr, has_keytag, attr->has_keytag);
+	MLX5_SET(dek, ptr, sw_wrapped, attr->sw_wrapped);
 	MLX5_SET(dek, ptr, key_purpose, attr->key_purpose);
 	MLX5_SET(dek, ptr, pd, attr->pd);
 	MLX5_SET64(dek, ptr, opaque, attr->opaque);
-	key_addr = MLX5_ADDR_OF(dek, ptr, key);
+	if (attr->sw_wrapped)
+		key_addr = MLX5_ADDR_OF(dek, ptr, sw_wrapped_dek);
+	else
+		key_addr = MLX5_ADDR_OF(dek, ptr, key);
 	memcpy(key_addr, (void *)(attr->key), MLX5_CRYPTO_KEY_MAX_SIZE);
 	dek_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
 						  out, sizeof(out));

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index d640482346..79502cda08 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -664,6 +664,7 @@ struct mlx5_devx_dek_attr {
 	uint32_t key_size:4;
 	uint32_t has_keytag:1;
 	uint32_t key_purpose:4;
+	uint32_t sw_wrapped:1;
 	uint32_t pd:24;
 	uint64_t opaque;
 	uint8_t key[MLX5_CRYPTO_KEY_MAX_SIZE];

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index a3b85f514e..9728be24dd 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3736,7 +3736,8 @@ enum {
 struct mlx5_ifc_dek_bits {
 	u8 modify_field_select[0x40];
 	u8 state[0x8];
-	u8 reserved_at_48[0xc];
+	u8 sw_wrapped[0x1];
+	u8 reserved_at_49[0xb];
 	u8 key_size[0x4];
 	u8 has_keytag[0x1];
 	u8 reserved_at_59[0x3];
@@ -3747,7 +3748,8 @@ struct mlx5_ifc_dek_bits {
 	u8 opaque[0x40];
 	u8 reserved_at_1c0[0x40];
 	u8 key[0x400];
-	u8 reserved_at_600[0x200];
+	u8 sw_wrapped_dek[0x400];
+	u8 reserved_at_a00[0x300];
 };
 
 struct mlx5_ifc_create_dek_in_bits {

diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 6963d8a9c9..66c9f94346 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -196,7 +196,7 @@ mlx5_crypto_sym_session_configure(struct rte_cryptodev *dev,
 		return -ENOTSUP;
 	}
 	cipher = &xform->cipher;
-	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, cipher);
+	sess_private_data->dek = mlx5_crypto_dek_prepare(priv, xform);
 	if (sess_private_data->dek == NULL) {
 		DRV_LOG(ERR, "Failed to prepare dek.");
 		return -ENOMEM;

diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 80c2cab0dd..11352f9409 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -40,6 +40,7 @@ struct mlx5_crypto_priv {
 	uint16_t umr_wqe_stride;
 	uint16_t max_rdmar_ds;
 	uint32_t is_wrapped_mode:1;
+	uint32_t is_gcm_dek_wrap:1;
 };
 
 struct mlx5_crypto_qp {
@@ -78,7 +79,7 @@ mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv,
 
 struct mlx5_crypto_dek *
 mlx5_crypto_dek_prepare(struct mlx5_crypto_priv *priv,
-			struct rte_crypto_cipher_xform *cipher);
+			struct rte_crypto_sym_xform *xform);
 
 int
 mlx5_crypto_dek_setup(struct mlx5_crypto_priv *priv);

diff --git a/drivers/crypto/mlx5/mlx5_crypto_dek.c b/drivers/crypto/mlx5/mlx5_crypto_dek.c
index 7339ef2bd9..ba6dab52f7 100644
--- a/drivers/crypto/mlx5/mlx5_crypto_dek.c
+++ b/drivers/crypto/mlx5/mlx5_crypto_dek.c
@@ -14,10 +14,29 @@
 #include "mlx5_crypto.h"
 
 struct mlx5_crypto_dek_ctx {
-	struct rte_crypto_cipher_xform *cipher;
+	struct rte_crypto_sym_xform *xform;
 	struct mlx5_crypto_priv *priv;
 };
 
+static int
+mlx5_crypto_dek_get_key(struct rte_crypto_sym_xform *xform,
+			const uint8_t **key,
+			uint16_t *key_len)
+{
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+		*key = xform->cipher.key.data;
+		*key_len = xform->cipher.key.length;
+	} else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
+		*key = xform->aead.key.data;
+		*key_len = xform->aead.key.length;
+	} else {
+		DRV_LOG(ERR, "Xform dek type not supported.");
+		rte_errno = EINVAL;
+		return -1;
+	}
+	return 0;
+}
+
 int
 mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv,
 			struct mlx5_crypto_dek *dek)
@@ -27,19 +46,22 @@ mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv,
 
 struct mlx5_crypto_dek *
 mlx5_crypto_dek_prepare(struct mlx5_crypto_priv *priv,
-			struct rte_crypto_cipher_xform *cipher)
+			struct rte_crypto_sym_xform *xform)
 {
+	const uint8_t *key;
+	uint16_t key_len;
 	struct mlx5_hlist *dek_hlist = priv->dek_hlist;
 	struct mlx5_crypto_dek_ctx dek_ctx = {
-		.cipher = cipher,
+		.xform = xform,
 		.priv = priv,
 	};
-	struct rte_crypto_cipher_xform *cipher_ctx = cipher;
-	uint64_t key64 = __rte_raw_cksum(cipher_ctx->key.data,
-					 cipher_ctx->key.length, 0);
-	struct mlx5_list_entry *entry = mlx5_hlist_register(dek_hlist,
-							    key64, &dek_ctx);
+	uint64_t key64;
+	struct mlx5_list_entry *entry;
+
+	if (mlx5_crypto_dek_get_key(xform, &key, &key_len))
+		return NULL;
+	key64 = __rte_raw_cksum(key, key_len, 0);
+	entry = mlx5_hlist_register(dek_hlist, key64, &dek_ctx);
 	return entry == NULL ? NULL :
 	       container_of(entry, struct mlx5_crypto_dek, entry);
 }
@@ -76,76 +98,141 @@ mlx5_crypto_dek_match_cb(void *tool_ctx __rte_unused,
 			 struct mlx5_list_entry *entry, void *cb_ctx)
 {
 	struct mlx5_crypto_dek_ctx *ctx = cb_ctx;
-	struct rte_crypto_cipher_xform *cipher_ctx = ctx->cipher;
+	struct rte_crypto_sym_xform *xform = ctx->xform;
 	struct mlx5_crypto_dek *dek =
 			container_of(entry, typeof(*dek), entry);
 	uint32_t key_len = dek->size;
+	uint16_t xkey_len;
+	const uint8_t *key;
 
-	if (key_len != cipher_ctx->key.length)
+	if (mlx5_crypto_dek_get_key(xform, &key, &xkey_len))
+		return -1;
+	if (key_len != xkey_len)
 		return -1;
-	return memcmp(cipher_ctx->key.data, dek->data, cipher_ctx->key.length);
+	return memcmp(key, dek->data, xkey_len);
 }
 
-static struct mlx5_list_entry *
-mlx5_crypto_dek_create_cb(void *tool_ctx __rte_unused, void *cb_ctx)
+static int
+mlx5_crypto_dek_create_aes_xts(struct mlx5_crypto_dek *dek,
+			       struct mlx5_devx_dek_attr *dek_attr,
+			       void *cb_ctx)
 {
 	struct mlx5_crypto_dek_ctx *ctx = cb_ctx;
-	struct rte_crypto_cipher_xform *cipher_ctx = ctx->cipher;
-	struct mlx5_crypto_dek *dek = rte_zmalloc(__func__, sizeof(*dek),
-						  RTE_CACHE_LINE_SIZE);
-	struct mlx5_devx_dek_attr dek_attr = {
-		.pd = ctx->priv->cdev->pdn,
-		.key_purpose = MLX5_CRYPTO_KEY_PURPOSE_AES_XTS,
-		.has_keytag = 1,
-	};
+	struct rte_crypto_cipher_xform *cipher_ctx = &ctx->xform->cipher;
 	bool is_wrapped = ctx->priv->is_wrapped_mode;
 
-	if (dek == NULL) {
-		DRV_LOG(ERR, "Failed to allocate dek memory.");
-		return NULL;
+	if (cipher_ctx->algo != RTE_CRYPTO_CIPHER_AES_XTS) {
+		DRV_LOG(ERR, "Only AES-XTS algo supported.");
+		return -EINVAL;
 	}
+	dek_attr->key_purpose = MLX5_CRYPTO_KEY_PURPOSE_AES_XTS;
+	dek_attr->has_keytag = 1;
 	if (is_wrapped) {
 		switch (cipher_ctx->key.length) {
 		case 48:
 			dek->size = 48;
-			dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
 			break;
 		case 80:
 			dek->size = 80;
-			dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
 			break;
 		default:
 			DRV_LOG(ERR, "Wrapped key size not supported.");
-			return NULL;
+			return -EINVAL;
 		}
 	} else {
 		switch (cipher_ctx->key.length) {
 		case 32:
			dek->size = 40;
-			dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
 			break;
 		case 64:
 			dek->size = 72;
-			dek_attr.key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+			dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
 			break;
 		default:
 			DRV_LOG(ERR, "Key size not supported.");
-			return NULL;
+			return -EINVAL;
 		}
-		memcpy(&dek_attr.key[cipher_ctx->key.length],
+		memcpy(&dek_attr->key[cipher_ctx->key.length],
 		       &ctx->priv->keytag, 8);
 	}
-	memcpy(&dek_attr.key, cipher_ctx->key.data, cipher_ctx->key.length);
+	memcpy(&dek_attr->key, cipher_ctx->key.data, cipher_ctx->key.length);
+	memcpy(&dek->data, cipher_ctx->key.data, cipher_ctx->key.length);
+	return 0;
+}
+
+static int
+mlx5_crypto_dek_create_aes_gcm(struct mlx5_crypto_dek *dek,
+			       struct mlx5_devx_dek_attr *dek_attr,
+			       void *cb_ctx)
+{
+	struct mlx5_crypto_dek_ctx *ctx = cb_ctx;
+	struct rte_crypto_aead_xform *aead_ctx = &ctx->xform->aead;
+
+	if (aead_ctx->algo != RTE_CRYPTO_AEAD_AES_GCM) {
+		DRV_LOG(ERR, "Only AES-GCM algo supported.");
+		return -EINVAL;
+	}
+	dek_attr->key_purpose = MLX5_CRYPTO_KEY_PURPOSE_GCM;
+	switch (aead_ctx->key.length) {
+	case 16:
+		dek->size = 16;
+		dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_128b;
+		break;
+	case 32:
+		dek->size = 32;
+		dek_attr->key_size = MLX5_CRYPTO_KEY_SIZE_256b;
+		break;
+	default:
+		DRV_LOG(ERR, "Key size not supported.");
+		return -EINVAL;
+	}
+#ifdef MLX5_DEK_WRAP
+	if (ctx->priv->is_gcm_dek_wrap)
+		dek_attr->sw_wrapped = 1;
+#endif
+	memcpy(&dek_attr->key, aead_ctx->key.data, aead_ctx->key.length);
+	memcpy(&dek->data, aead_ctx->key.data, aead_ctx->key.length);
+	return 0;
+}
+
+static struct mlx5_list_entry *
+mlx5_crypto_dek_create_cb(void *tool_ctx
__rte_unused, void *cb_ctx) +{ + struct mlx5_crypto_dek_ctx *ctx = cb_ctx; + struct rte_crypto_sym_xform *xform = ctx->xform; + struct mlx5_crypto_dek *dek = rte_zmalloc(__func__, sizeof(*dek), + RTE_CACHE_LINE_SIZE); + struct mlx5_devx_dek_attr dek_attr = { + .pd = ctx->priv->cdev->pdn, + }; + int ret = -1; + + if (dek == NULL) { + DRV_LOG(ERR, "Failed to allocate dek memory."); + return NULL; + } + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) + ret = mlx5_crypto_dek_create_aes_xts(dek, &dek_attr, cb_ctx); + else if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) + ret = mlx5_crypto_dek_create_aes_gcm(dek, &dek_attr, cb_ctx); + if (ret) + goto fail; dek->obj = mlx5_devx_cmd_create_dek_obj(ctx->priv->cdev->ctx, &dek_attr); if (dek->obj == NULL) { - rte_free(dek); - return NULL; + DRV_LOG(ERR, "Failed to create dek obj."); + goto fail; } - memcpy(&dek->data, cipher_ctx->key.data, cipher_ctx->key.length); return &dek->entry; +fail: + rte_free(dek); + return NULL; } + static void mlx5_crypto_dek_remove_cb(void *tool_ctx __rte_unused, struct mlx5_list_entry *entry) diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index d60ac379cf..c7fd86d7b9 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -95,6 +95,8 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) return -1; } priv->caps = mlx5_crypto_gcm_caps; + priv->is_gcm_dek_wrap = !!(cdev->config.hca_attr.sw_wrapped_dek & + (1 << MLX5_CRYPTO_KEY_PURPOSE_GCM)); return 0; }

From patchwork Tue Apr 18 09:23:23 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 126230
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
Subject: [RFC PATCH 3/5] crypto/mlx5: add AES-GCM session configure
Date: Tue, 18 Apr 2023 12:23:23 +0300
Message-ID: <20230418092325.2578712-4-suanmingm@nvidia.com>
In-Reply-To: <20230418092325.2578712-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions
Sessions are used in symmetric transformations in order to prepare objects and data for the packet processing stage. The AES-GCM session includes the IV, AAD, digest (tag), DEK and operation mode information.
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_prm.h | 12 +++++++ drivers/crypto/mlx5/mlx5_crypto.c | 15 --------- drivers/crypto/mlx5/mlx5_crypto.h | 35 ++++++++++++++++++++ drivers/crypto/mlx5/mlx5_crypto_gcm.c | 46 +++++++++++++++++++++++++++ 4 files changed, 93 insertions(+), 15 deletions(-) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9728be24dd..25ff66ee7e 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -528,11 +528,23 @@ enum { MLX5_BLOCK_SIZE_4048B = 0x6, }; +enum { + MLX5_ENCRYPTION_TYPE_AES_GCM = 0x3, +}; + +enum { + MLX5_CRYPTO_OP_TYPE_ENCRYPTION = 0x0, + MLX5_CRYPTO_OP_TYPE_DECRYPTION = 0x1, +}; + #define MLX5_BSF_SIZE_OFFSET 30 #define MLX5_BSF_P_TYPE_OFFSET 24 #define MLX5_ENCRYPTION_ORDER_OFFSET 16 #define MLX5_BLOCK_SIZE_OFFSET 24 +#define MLX5_CRYPTO_MMO_TYPE_OFFSET 24 +#define MLX5_CRYPTO_MMO_OP_OFFSET 20 + struct mlx5_wqe_umr_bsf_seg { /* * bs_bpt_eo_es contains: diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 66c9f94346..8946f13e5e 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -83,21 +83,6 @@ static const struct rte_driver mlx5_drv = { static struct cryptodev_driver mlx5_cryptodev_driver; -struct mlx5_crypto_session { - uint32_t bs_bpt_eo_es; - /**< bsf_size, bsf_p_type, encryption_order and encryption standard, - * saved in big endian format. - */ - uint32_t bsp_res; - /**< crypto_block_size_pointer and reserved 24 bits saved in big - * endian format. - */ - uint32_t iv_offset:16; - /**< Starting point for Initialisation Vector. */ - struct mlx5_crypto_dek *dek; /**< Pointer to dek struct. 
*/ - uint32_t dek_id; /**< DEK ID */ -} __rte_packed; - static void mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev, struct rte_cryptodev_info *dev_info) diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 11352f9409..c34a860404 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -73,6 +73,41 @@ struct mlx5_crypto_devarg_params { uint32_t is_aes_gcm:1; }; +struct mlx5_crypto_session { + union { + /**< AES-XTS configuration. */ + struct { + uint32_t bs_bpt_eo_es; + /**< bsf_size, bsf_p_type, encryption_order and encryption standard, + * saved in big endian format. + */ + uint32_t bsp_res; + /**< crypto_block_size_pointer and reserved 24 bits saved in big + * endian format. + */ + }; + /**< AES-GCM configuration. */ + struct { + uint32_t mmo_ctrl; + /**< Crypto control fields with algo type and op type in big + * endian format. + */ + uint16_t tag_len; + /**< AES-GCM crypto digest size in bytes. */ + uint16_t aad_len; + /**< The length of the additional authenticated data (AAD) in bytes. */ + uint32_t op_type; + /**< Operation type. */ + }; + }; + uint32_t iv_offset:16; + /**< Starting point for Initialisation Vector. */ + uint32_t iv_len; + /**< Initialisation Vector length. */ + struct mlx5_crypto_dek *dek; /**< Pointer to dek struct. 
*/ + uint32_t dek_id; /**< DEK ID */ +} __rte_packed; + int mlx5_crypto_dek_destroy(struct mlx5_crypto_priv *priv, struct mlx5_crypto_dek *dek); diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index c7fd86d7b9..6c2c759fba 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -81,12 +81,58 @@ mlx5_crypto_generate_gcm_cap(struct mlx5_hca_crypto_mmo_attr *mmo_attr, return 0; } +static int +mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_sym_xform *xform, + struct rte_cryptodev_sym_session *session) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_crypto_session *sess_private_data = CRYPTODEV_GET_SYM_SESS_PRIV(session); + struct rte_crypto_aead_xform *aead = &xform->aead; + uint32_t op_type; + + if (unlikely(xform->next != NULL)) { + DRV_LOG(ERR, "Xform next is not supported."); + return -ENOTSUP; + } + if (aead->algo != RTE_CRYPTO_AEAD_AES_GCM) { + DRV_LOG(ERR, "Only AES-GCM algorithm is supported."); + return -ENOTSUP; + } + if (aead->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) + op_type = MLX5_CRYPTO_OP_TYPE_ENCRYPTION; + else + op_type = MLX5_CRYPTO_OP_TYPE_DECRYPTION; + sess_private_data->op_type = op_type; + sess_private_data->mmo_ctrl = rte_cpu_to_be_32 + (op_type << MLX5_CRYPTO_MMO_OP_OFFSET | + MLX5_ENCRYPTION_TYPE_AES_GCM << MLX5_CRYPTO_MMO_TYPE_OFFSET); + sess_private_data->aad_len = aead->aad_length; + sess_private_data->tag_len = aead->digest_length; + sess_private_data->iv_offset = aead->iv.offset; + sess_private_data->iv_len = aead->iv.length; + sess_private_data->dek = mlx5_crypto_dek_prepare(priv, xform); + if (sess_private_data->dek == NULL) { + DRV_LOG(ERR, "Failed to prepare dek."); + return -ENOMEM; + } + sess_private_data->dek_id = + rte_cpu_to_be_32(sess_private_data->dek->obj->id & + 0xffffff); + DRV_LOG(DEBUG, "Session %p was configured.", sess_private_data); + return 0; +} + int mlx5_crypto_gcm_init(struct 
mlx5_crypto_priv *priv) { struct mlx5_common_device *cdev = priv->cdev; + struct rte_cryptodev *crypto_dev = priv->crypto_dev; + struct rte_cryptodev_ops *dev_ops = crypto_dev->dev_ops; int ret; + /* Override AES-GCM specified ops. */ + dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure; /* Generate GCM capability. */ ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo, mlx5_crypto_gcm_caps);

From patchwork Tue Apr 18 09:23:24 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 126231
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
Subject: [RFC PATCH 4/5] crypto/mlx5: add queue pair setup
Date: Tue, 18 Apr 2023 12:23:24 +0300
Message-ID: <20230418092325.2578712-5-suanmingm@nvidia.com>
In-Reply-To: <20230418092325.2578712-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions
The crypto queue pair handles the encryption/decryption operations. The AES-GCM AEAD API provides the AAD, mbuf and digest separately, while the low-level FW accepts the data only in a single contiguous memory region, so two internal QPs are created for each AES-GCM queue pair: one organizes the memory into a contiguous region when needed, the other performs the crypto work. If the buffers are found to be implicitly contiguous, they are sent directly to the crypto QP for encryption/decryption. Otherwise they are handled first by the UMR QP, which converts them into one contiguous buffer that the crypto QP can then process. The crypto QP is initialized as follower and the UMR QP as leader: when a crypto operation's input buffer requires the address space conversion done by the UMR QP, the crypto QP processing is triggered by the UMR QP; otherwise the crypto QP doorbell is rung directly. The existing max_segs_num devarg defines how many segments a chained mbuf may contain, the same as for AES-XTS.
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_devx_cmds.c | 6 + drivers/common/mlx5/mlx5_devx_cmds.h | 3 + drivers/common/mlx5/mlx5_prm.h | 24 +++ drivers/crypto/mlx5/mlx5_crypto.c | 17 ++ drivers/crypto/mlx5/mlx5_crypto.h | 12 ++ drivers/crypto/mlx5/mlx5_crypto_gcm.c | 254 ++++++++++++++++++++++++++ 6 files changed, 316 insertions(+) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 8b51a75cc8..6be02c0a65 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -2563,6 +2563,12 @@ mlx5_devx_cmd_create_qp(void *ctx, attr->dbr_umem_valid); MLX5_SET(qpc, qpc, dbr_umem_id, attr->dbr_umem_id); } + if (attr->cd_master) + MLX5_SET(qpc, qpc, cd_master, attr->cd_master); + if (attr->cd_slave_send) + MLX5_SET(qpc, qpc, cd_slave_send, attr->cd_slave_send); + if (attr->cd_slave_recv) + MLX5_SET(qpc, qpc, cd_slave_receive, attr->cd_slave_recv); MLX5_SET64(qpc, qpc, dbr_addr, attr->dbr_address); MLX5_SET64(create_qp_in, in, wq_umem_offset, attr->wq_umem_offset); diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index 79502cda08..e68aa077d7 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -590,6 +590,9 @@ struct mlx5_devx_qp_attr { uint64_t wq_umem_offset; uint32_t user_index:24; uint32_t mmo:1; + uint32_t cd_master:1; + uint32_t cd_slave_send:1; + uint32_t cd_slave_recv:1; }; struct mlx5_devx_virtio_q_couners_attr { diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 25ff66ee7e..c8d73a8456 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -594,6 +594,17 @@ struct mlx5_rdma_write_wqe { struct mlx5_wqe_dseg dseg[]; } __rte_packed; +struct mlx5_wqe_send_en_seg { + uint32_t reserve[2]; + uint32_t sqnpc; + uint32_t qpn; +} __rte_packed; + +struct mlx5_wqe_send_en_wqe { + struct mlx5_wqe_cseg ctr; + struct mlx5_wqe_send_en_seg sseg; +} 
__rte_packed; + #ifdef PEDANTIC #pragma GCC diagnostic error "-Wpedantic" #endif @@ -668,6 +679,19 @@ union mlx5_gga_compress_opaque { uint32_t data[64]; }; +union mlx5_gga_crypto_opaque { + struct { + uint32_t syndrome; + uint32_t reserved0[2]; + struct { + uint32_t iv[3]; + uint32_t tag_size; + uint32_t aad_size; + } cp __rte_packed; + } __rte_packed; + uint8_t data[64]; +}; + struct mlx5_ifc_regexp_mmo_control_bits { uint8_t reserved_at_31[0x2]; uint8_t le[0x1]; diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c index 8946f13e5e..f2e5b25c15 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.c +++ b/drivers/crypto/mlx5/mlx5_crypto.c @@ -849,12 +849,27 @@ mlx5_crypto_max_segs_num(uint16_t max_wqe_size) return max_segs_cap; } +static __rte_always_inline int +mlx5_crypto_configure_gcm_wqe_size(struct mlx5_crypto_priv *priv) +{ + uint32_t send_en_wqe_size; + + priv->umr_wqe_size = RTE_ALIGN(sizeof(struct mlx5_umr_wqe) + sizeof(struct mlx5_wqe_dseg), + MLX5_SEND_WQE_BB); + send_en_wqe_size = RTE_ALIGN(sizeof(struct mlx5_wqe_send_en_wqe), MLX5_SEND_WQE_BB); + priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB; + priv->wqe_set_size = priv->umr_wqe_size + send_en_wqe_size; + return 0; +} + static int mlx5_crypto_configure_wqe_size(struct mlx5_crypto_priv *priv, uint16_t max_wqe_size, uint32_t max_segs_num) { uint32_t rdmw_wqe_size, umr_wqe_size; + if (priv->is_gcm_dek_wrap) + return mlx5_crypto_configure_gcm_wqe_size(priv); mlx5_crypto_get_wqe_sizes(max_segs_num, &umr_wqe_size, &rdmw_wqe_size); priv->wqe_set_size = rdmw_wqe_size + umr_wqe_size; @@ -927,12 +942,14 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev, priv->cdev = cdev; priv->crypto_dev = crypto_dev; priv->is_wrapped_mode = wrapped_mode; + priv->max_segs_num = devarg_prms.max_segs_num; priv->caps = mlx5_crypto_caps; /* Init and override AES-GCM configuration. 
*/ if (devarg_prms.is_aes_gcm) { ret = mlx5_crypto_gcm_init(priv); if (ret) { DRV_LOG(ERR, "Failed to init AES-GCM crypto."); + return -ENOTSUP; } } if (mlx5_devx_uar_prepare(cdev, &priv->uar) != 0) { diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index c34a860404..9945891ea8 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -47,15 +47,27 @@ struct mlx5_crypto_qp { struct mlx5_crypto_priv *priv; struct mlx5_devx_cq cq_obj; struct mlx5_devx_qp qp_obj; + struct mlx5_devx_cq umr_cq_obj; + struct mlx5_devx_qp umr_qp_obj; struct rte_cryptodev_stats stats; struct rte_crypto_op **ops; struct mlx5_devx_obj **mkey; /* WQE's indirect mekys. */ + struct mlx5_klm *klm_array; struct mlx5_mr_ctrl mr_ctrl; + struct mlx5_pmd_mr opaque_mr; + struct mlx5_pmd_mr klm_mr; + /* Crypto QP. */ uint8_t *wqe; uint16_t entries_n; uint16_t pi; uint16_t ci; uint16_t db_pi; + /* UMR QP. */ + uint8_t *umr_wqe; + uint16_t umr_wqbbs; + uint16_t umr_pi; + uint16_t umr_ci; + uint32_t umr_errors; }; struct mlx5_crypto_dek { diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index 6c2c759fba..b67f22c591 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -123,6 +123,257 @@ mlx5_crypto_sym_gcm_session_configure(struct rte_cryptodev *dev, return 0; } +static void +mlx5_crypto_gcm_indirect_mkeys_release(struct mlx5_crypto_qp *qp, uint16_t n) +{ + uint16_t i; + + for (i = 0; i < n; i++) + if (qp->mkey[i]) + claim_zero(mlx5_devx_cmd_destroy(qp->mkey[i])); +} + +static int +mlx5_crypto_gcm_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv, + struct mlx5_crypto_qp *qp) +{ + uint32_t i; + struct mlx5_devx_mkey_attr attr = { + .pd = priv->cdev->pdn, + .umr_en = 1, + .set_remote_rw = 1, + .klm_num = priv->max_segs_num, + }; + + for (i = 0; i < qp->entries_n; i++) { + attr.klm_array = (struct mlx5_klm *)&qp->klm_array[i * priv->max_segs_num]; + 
qp->mkey[i] = mlx5_devx_cmd_mkey_create(priv->cdev->ctx, &attr); + if (!qp->mkey[i]) + goto error; + } + return 0; +error: + DRV_LOG(ERR, "Failed to allocate gcm indirect mkey."); + mlx5_crypto_gcm_indirect_mkeys_release(qp, i); + return -1; +} + +static int +mlx5_crypto_gcm_qp_release(struct rte_cryptodev *dev, uint16_t qp_id) +{ + struct mlx5_crypto_qp *qp = dev->data->queue_pairs[qp_id]; + + if (qp->umr_qp_obj.qp != NULL) + mlx5_devx_qp_destroy(&qp->umr_qp_obj); + if (qp->umr_cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&qp->umr_cq_obj); + if (qp->qp_obj.qp != NULL) + mlx5_devx_qp_destroy(&qp->qp_obj); + if (qp->cq_obj.cq != NULL) + mlx5_devx_cq_destroy(&qp->cq_obj); + if (qp->opaque_mr.obj != NULL) { + void *opaq = qp->opaque_mr.addr; + + mlx5_common_verbs_dereg_mr(&qp->opaque_mr); + rte_free(opaq); + } + mlx5_crypto_gcm_indirect_mkeys_release(qp, qp->entries_n); + if (qp->klm_mr.obj != NULL) { + void *klm = qp->klm_mr.addr; + + mlx5_common_verbs_dereg_mr(&qp->klm_mr); + rte_free(klm); + } + mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh); + rte_free(qp); + dev->data->queue_pairs[qp_id] = NULL; + return 0; +} + +static void +mlx5_crypto_gcm_init_qp(struct mlx5_crypto_qp *qp) +{ + volatile struct mlx5_gga_wqe *restrict wqe = + (volatile struct mlx5_gga_wqe *)qp->qp_obj.wqes; + volatile union mlx5_gga_crypto_opaque *opaq = qp->opaque_mr.addr; + const uint32_t sq_ds = rte_cpu_to_be_32((qp->qp_obj.qp->id << 8) | 4u); + const uint32_t flags = RTE_BE32(MLX5_COMP_ALWAYS << + MLX5_COMP_MODE_OFFSET); + const uint32_t opaq_lkey = rte_cpu_to_be_32(qp->opaque_mr.lkey); + int i; + + /* All the next fields state should stay constant. 
*/ + for (i = 0; i < qp->entries_n; ++i, ++wqe) { + wqe->sq_ds = sq_ds; + wqe->flags = flags; + wqe->opaque_lkey = opaq_lkey; + wqe->opaque_vaddr = rte_cpu_to_be_64((uint64_t)(uintptr_t)&opaq[i]); + } +} + +static inline int +mlx5_crypto_gcm_umr_qp_setup(struct rte_cryptodev *dev, struct mlx5_crypto_qp *qp, + uint16_t log_nb_desc, int socket_id) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_devx_qp_attr attr = {0}; + uint32_t ret; + uint32_t log_wqbb_n; + struct mlx5_devx_cq_attr cq_attr = { + .use_first_only = 1, + .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj), + }; + size_t klm_size = priv->max_segs_num * sizeof(struct mlx5_klm); + void *klm_array; + + klm_array = rte_calloc(__func__, (size_t)qp->entries_n, klm_size, 64); + if (klm_array == NULL) { + DRV_LOG(ERR, "Failed to allocate opaque memory."); + rte_errno = ENOMEM; + return -1; + } + if (mlx5_common_verbs_reg_mr(priv->cdev->pd, klm_array, + qp->entries_n * klm_size, + &qp->klm_mr) != 0) { + rte_free(klm_array); + DRV_LOG(ERR, "Failed to register klm MR."); + rte_errno = ENOMEM; + return -1; + } + qp->klm_array = (struct mlx5_klm *)qp->klm_mr.addr; + if (mlx5_devx_cq_create(priv->cdev->ctx, &qp->umr_cq_obj, log_nb_desc, + &cq_attr, socket_id) != 0) { + DRV_LOG(ERR, "Failed to create UMR CQ."); + return -1; + } + /* Set UMR + SEND_EN WQE as maximum same with crypto. 
*/ + log_wqbb_n = rte_log2_u32(qp->entries_n * + (priv->wqe_set_size / MLX5_SEND_WQE_BB)); + attr.pd = priv->cdev->pdn; + attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj); + attr.cqn = qp->umr_cq_obj.cq->id; + attr.num_of_receive_wqes = 0; + attr.num_of_send_wqbbs = RTE_BIT32(log_wqbb_n); + attr.ts_format = + mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format); + attr.cd_master = 1; + ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->umr_qp_obj, + attr.num_of_send_wqbbs * MLX5_SEND_WQE_BB, + &attr, socket_id); + if (ret) { + DRV_LOG(ERR, "Failed to create UMR QP."); + return -1; + } + if (mlx5_devx_qp2rts(&qp->umr_qp_obj, qp->umr_qp_obj.qp->id)) { + DRV_LOG(ERR, "Failed to change UMR QP state to RTS."); + return -1; + } + /* Save the UMR WQEBBS for checking the WQE boundary. */ + qp->umr_wqbbs = attr.num_of_send_wqbbs; + return 0; +} + +static int +mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct mlx5_crypto_priv *priv = dev->data->dev_private; + struct mlx5_hca_attr *attr = &priv->cdev->config.hca_attr; + struct mlx5_crypto_qp *qp; + struct mlx5_devx_cq_attr cq_attr = { + .uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj), + }; + struct mlx5_devx_qp_attr qp_attr = { + .pd = priv->cdev->pdn, + .uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj), + .user_index = qp_id, + }; + uint32_t log_ops_n = rte_log2_u32(qp_conf->nb_descriptors); + uint32_t entries = RTE_BIT32(log_ops_n); + uint32_t alloc_size = sizeof(*qp); + void *opaq_buf; + int ret; + + alloc_size = RTE_ALIGN(alloc_size, RTE_CACHE_LINE_SIZE); + alloc_size += (sizeof(struct rte_crypto_op *) + + sizeof(struct mlx5_devx_obj *)) * entries; + qp = rte_zmalloc_socket(__func__, alloc_size, RTE_CACHE_LINE_SIZE, + socket_id); + if (qp == NULL) { + DRV_LOG(ERR, "Failed to allocate qp memory."); + rte_errno = ENOMEM; + return -rte_errno; + } + qp->priv = priv; + qp->entries_n = entries; + if 
(mlx5_mr_ctrl_init(&qp->mr_ctrl, &priv->cdev->mr_scache.dev_gen, + priv->dev_config.socket_id)) { + DRV_LOG(ERR, "Cannot allocate MR Btree for qp %u.", + (uint32_t)qp_id); + rte_errno = ENOMEM; + goto err; + } + opaq_buf = rte_calloc(__func__, (size_t)entries, + sizeof(union mlx5_gga_crypto_opaque), + sizeof(union mlx5_gga_crypto_opaque)); + if (opaq_buf == NULL) { + DRV_LOG(ERR, "Failed to allocate opaque memory."); + rte_errno = ENOMEM; + goto err; + } + if (mlx5_common_verbs_reg_mr(priv->cdev->pd, opaq_buf, entries * + sizeof(union mlx5_gga_crypto_opaque), + &qp->opaque_mr) != 0) { + rte_free(opaq_buf); + DRV_LOG(ERR, "Failed to register opaque MR."); + rte_errno = ENOMEM; + goto err; + } + ret = mlx5_devx_cq_create(priv->cdev->ctx, &qp->cq_obj, log_ops_n, + &cq_attr, socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create CQ."); + goto err; + } + qp_attr.cqn = qp->cq_obj.cq->id; + qp_attr.ts_format = mlx5_ts_format_conv(attr->qp_ts_format); + qp_attr.num_of_receive_wqes = 0; + qp_attr.num_of_send_wqbbs = entries; + qp_attr.mmo = attr->crypto_mmo.crypto_mmo_qp; + /* Set MMO QP as follower as the input data may depend on UMR. 
*/ + qp_attr.cd_slave_send = 1; + ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj, + qp_attr.num_of_send_wqbbs * MLX5_WQE_SIZE, + &qp_attr, socket_id); + if (ret != 0) { + DRV_LOG(ERR, "Failed to create QP."); + goto err; + } + mlx5_crypto_gcm_init_qp(qp); + ret = mlx5_devx_qp2rts(&qp->qp_obj, 0); + if (ret) + goto err; + qp->ops = (struct rte_crypto_op **)(qp + 1); + qp->mkey = (struct mlx5_devx_obj **)(qp->ops + entries); + if (mlx5_crypto_gcm_umr_qp_setup(dev, qp, log_ops_n, socket_id)) { + DRV_LOG(ERR, "Failed to setup UMR QP."); + goto err; + } + DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u", + (uint32_t)qp_id, qp->qp_obj.qp->id, qp->cq_obj.cq->id, entries); + if (mlx5_crypto_gcm_indirect_mkeys_prepare(priv, qp)) { + DRV_LOG(ERR, "Cannot allocate indirect memory regions."); + rte_errno = ENOMEM; + goto err; + } + dev->data->queue_pairs[qp_id] = qp; + return 0; +err: + mlx5_crypto_gcm_qp_release(dev, qp_id); + return -1; +} + int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) { @@ -133,6 +384,8 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) /* Override AES-GCM specified ops. */ dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure; + dev_ops->queue_pair_setup = mlx5_crypto_gcm_qp_setup; + dev_ops->queue_pair_release = mlx5_crypto_gcm_qp_release; /* Generate GCM capability. 
*/ ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo, mlx5_crypto_gcm_caps); @@ -140,6 +393,7 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) DRV_LOG(ERR, "Not enough AES-GCM capability."); return -1; } + priv->max_segs_num = rte_align32pow2((priv->max_segs_num + 2) * 2); priv->caps = mlx5_crypto_gcm_caps; priv->is_gcm_dek_wrap = !!(cdev->config.hca_attr.sw_wrapped_dek & (1 << MLX5_CRYPTO_KEY_PURPOSE_GCM));

From patchwork Tue Apr 18 09:23:25 2023
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 126232
X-Patchwork-Delegate: gakhil@marvell.com
From: Suanming Mou
Subject: [RFC PATCH 5/5] crypto/mlx5: add enqueue and dequeue operations
Date: Tue, 18 Apr 2023 12:23:25 +0300
Message-ID: <20230418092325.2578712-6-suanmingm@nvidia.com>
In-Reply-To: <20230418092325.2578712-1-suanmingm@nvidia.com>
References: <20230418092325.2578712-1-suanmingm@nvidia.com>
List-Id: DPDK patches and discussions

Crypto operations are performed with crypto WQEs. If the input buffers (AAD, mbuf, digest) are not contiguous, the FW requires a UMR WQE to present them as a single contiguous address space to the crypto WQE. The UMR WQEs and crypto WQEs are handled in two different QPs. The UMR QP carries two types of WQE: UMR and SEND_EN. These WQEs are built dynamically from the crypto operation's buffer addresses: an operation with non-contiguous buffers gets its own UMR WQE, while an operation with contiguous buffers needs none. Once all the WQEs of an enqueue burst have been built, and if any UMR WQEs were built, one additional SEND_EN WQE is appended as the final WQE of the burst in the UMR QP. That SEND_EN WQE triggers the crypto QP processing once the UMR-prepared input address spaces are ready. The crypto QP contains only crypto WQEs, which are pre-built as fixed templates during QP setup. Crypto QP processing is triggered either by a doorbell ring or by the SEND_EN WQE from the UMR QP.
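The enqueue/dequeue bursts described above track ring occupancy with free-running 16-bit producer/consumer counters over a power-of-two ring (the qp->pi, qp->ci and entries_n - 1 mask in the patch). A minimal sketch of that counter arithmetic — the helper names here are illustrative, not driver APIs:

```c
#include <stdint.h>

/*
 * Simplified model of the QP ring counters: pi and ci are free-running
 * 16-bit counters and entries_n is a power of two, matching the
 * "entries_n - (pi - ci)" and "pi & (entries_n - 1)" expressions in
 * the enqueue burst.
 */
static inline uint16_t
ring_free_slots(uint16_t entries_n, uint16_t pi, uint16_t ci)
{
	/* Unsigned 16-bit wraparound keeps pi - ci correct after overflow. */
	return (uint16_t)(entries_n - (uint16_t)(pi - ci));
}

static inline uint16_t
ring_slot_index(uint16_t entries_n, uint16_t pi)
{
	/* Valid only because entries_n is a power of two. */
	return (uint16_t)(pi & (entries_n - 1));
}
```

Because the subtraction stays correct across 16-bit wraparound, the counters never need to be reset; only the masked value is used to address the ring.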
Signed-off-by: Suanming Mou --- drivers/common/mlx5/mlx5_prm.h | 1 + drivers/crypto/mlx5/mlx5_crypto.h | 2 + drivers/crypto/mlx5/mlx5_crypto_gcm.c | 401 ++++++++++++++++++++++++++ 3 files changed, 404 insertions(+) diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index c8d73a8456..71000ebf02 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -613,6 +613,7 @@ struct mlx5_wqe_send_en_wqe { /* MMO metadata segment */ #define MLX5_OPCODE_MMO 0x2fu +#define MLX5_OPC_MOD_MMO_CRYPTO 0x6u #define MLX5_OPC_MOD_MMO_REGEX 0x4u #define MLX5_OPC_MOD_MMO_COMP 0x2u #define MLX5_OPC_MOD_MMO_DECOMP 0x3u diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h index 9945891ea8..0b0ef1a84d 100644 --- a/drivers/crypto/mlx5/mlx5_crypto.h +++ b/drivers/crypto/mlx5/mlx5_crypto.h @@ -66,8 +66,10 @@ struct mlx5_crypto_qp { uint8_t *umr_wqe; uint16_t umr_wqbbs; uint16_t umr_pi; + uint16_t umr_last_pi; uint16_t umr_ci; uint32_t umr_errors; + bool has_umr; }; struct mlx5_crypto_dek { diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c index b67f22c591..40cf4c804e 100644 --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c @@ -9,6 +9,7 @@ #include #include #include +#include #include #include @@ -18,6 +19,17 @@ #include "mlx5_crypto_utils.h" #include "mlx5_crypto.h" +#define MLX5_MMO_CRYPTO_OPC (MLX5_OPCODE_MMO | \ + (MLX5_OPC_MOD_MMO_CRYPTO << WQE_CSEG_OPC_MOD_OFFSET)) + +struct mlx5_crypto_gcm_data { + void *src_addr; + uint32_t src_bytes; + void *dst_addr; + uint32_t dst_bytes; + uint32_t mkey; +}; + static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = { { .op = RTE_CRYPTO_OP_TYPE_UNDEFINED, @@ -246,6 +258,10 @@ mlx5_crypto_gcm_umr_qp_setup(struct rte_cryptodev *dev, struct mlx5_crypto_qp *q DRV_LOG(ERR, "Failed to create UMR CQ."); return -1; } + /* Init CQ to ones to be in HW owner in the start. 
*/ + qp->umr_cq_obj.cqes[0].op_own = MLX5_CQE_OWNER_MASK; + qp->umr_cq_obj.cqes[0].wqe_counter = rte_cpu_to_be_16(UINT16_MAX); + qp->umr_last_pi = UINT16_MAX; /* Set UMR + SEND_EN WQE as maximum same with crypto. */ log_wqbb_n = rte_log2_u32(qp->entries_n * (priv->wqe_set_size / MLX5_SEND_WQE_BB)); @@ -374,6 +390,389 @@ mlx5_crypto_gcm_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, return -1; } +static __rte_always_inline bool +mlx5_crypto_is_gcm_input_continuous(struct rte_crypto_op *op) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct rte_mbuf *m_src = op->sym->m_src; + void *aad_addr = op->sym->aead.aad.data; + void *tag_addr = op->sym->aead.digest.data; + void *pkt_addr = rte_pktmbuf_mtod_offset(m_src, void *, op->sym->aead.data.offset); + + /* Out of place mode, AAD will never satisfy the expectation. */ + if ((op->sym->m_dst && op->sym->m_dst != m_src) || + (m_src->nb_segs > 1) || + (RTE_PTR_ADD(aad_addr, sess->aad_len) != pkt_addr) || + (RTE_PTR_ADD(pkt_addr, op->sym->aead.data.length) != tag_addr)) + return false; + return true; +} + +static __rte_always_inline uint32_t +mlx5_crypto_gcm_umr_klm_set(struct mlx5_crypto_qp *qp, struct rte_mbuf *mbuf, + struct mlx5_klm *klm, uint32_t offset, + uint32_t *remain) +{ + uint32_t data_len = (rte_pktmbuf_data_len(mbuf) - offset); + uintptr_t addr = rte_pktmbuf_mtod_offset(mbuf, uintptr_t, offset); + + if (data_len > *remain) + data_len = *remain; + *remain -= data_len; + klm->byte_count = rte_cpu_to_be_32(data_len); + klm->address = rte_cpu_to_be_64(addr); + klm->mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, mbuf); + return klm->mkey; +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_klm(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + struct rte_mbuf *mbuf, + struct mlx5_klm *klm) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + uint32_t remain_len = op->sym->aead.data.length; + uint32_t nb_segs = mbuf->nb_segs; + 
uint32_t klm_n = 0; + + /* Set AAD. */ + klm->byte_count = rte_cpu_to_be_32(sess->aad_len); + klm->address = rte_cpu_to_be_64((uintptr_t)op->sym->aead.aad.data); + klm->mkey = mlx5_mr_addr2mr_bh(&qp->mr_ctrl, (uintptr_t)op->sym->aead.aad.data); + klm_n++; + /* First mbuf needs to take the data offset. */ + if (unlikely(mlx5_crypto_gcm_umr_klm_set(qp, mbuf, ++klm, + op->sym->aead.data.offset, &remain_len) == UINT32_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return 0; + } + klm_n++; + while (remain_len) { + nb_segs--; + mbuf = mbuf->next; + if (unlikely(mbuf == NULL || nb_segs == 0)) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return 0; + } + if (unlikely(mlx5_crypto_gcm_umr_klm_set(qp, mbuf, ++klm, 0, + &remain_len) == UINT32_MAX)) { + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + return 0; + } + klm_n++; + } + /* Set TAG. */ + klm++; + klm->byte_count = rte_cpu_to_be_32((uint32_t)sess->tag_len); + klm->address = rte_cpu_to_be_64((uintptr_t)op->sym->aead.digest.data); + klm->mkey = mlx5_mr_addr2mr_bh(&qp->mr_ctrl, (uintptr_t)op->sym->aead.digest.data); + klm_n++; + return klm_n; +} + +static __rte_always_inline void* +mlx5_crypto_gcm_get_umr_wqe(struct mlx5_crypto_qp *qp) +{ + struct mlx5_crypto_priv *priv = qp->priv; + uint32_t wqe_offset = qp->umr_pi & (qp->umr_wqbbs - 1); + uint32_t left_wqbbs = qp->umr_wqbbs - wqe_offset; + struct mlx5_wqe_cseg *wqe; + + /* If UMR WQE is near the boundary. */ + if (left_wqbbs < priv->umr_wqe_stride) { + /* Append NOP WQE as the left WQEBBS is not enough for UMR. 
*/ + wqe = (struct mlx5_wqe_cseg *)RTE_PTR_ADD(qp->umr_qp_obj.umem_buf, + wqe_offset * MLX5_SEND_WQE_BB); + wqe->opcode = RTE_BE32(MLX5_OPCODE_NOP | ((uint32_t)qp->umr_pi << 8)); + wqe->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | (left_wqbbs << 2)); + wqe->flags = RTE_BE32(0); + wqe->misc = RTE_BE32(0); + qp->umr_pi += left_wqbbs; + wqe_offset = qp->umr_pi & (qp->umr_wqbbs - 1); + } + wqe_offset *= MLX5_SEND_WQE_BB; + return RTE_PTR_ADD(qp->umr_qp_obj.umem_buf, wqe_offset); +} + +static __rte_always_inline int +mlx5_crypto_gcm_build_umr(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + uint32_t idx, + struct mlx5_crypto_gcm_data *data) +{ + struct mlx5_crypto_priv *priv = qp->priv; + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct mlx5_wqe_cseg *wqe; + struct mlx5_wqe_umr_ctrl_seg *ucseg; + struct mlx5_wqe_mkey_context_seg *mkc; + struct mlx5_klm *iklm; + struct mlx5_klm *klm = &qp->klm_array[idx * priv->max_segs_num]; + uint16_t klm_size, klm_align; + uint16_t klm_src = 0, klm_dst = 0; + uint32_t total_len = op->sym->aead.data.length + sess->aad_len + sess->tag_len; + uint32_t i; + + /* Build KLM base on the op. */ + klm_src = mlx5_crypto_gcm_build_klm(qp, op, op->sym->m_src, klm); + if (!klm_src) + return -EINVAL; + if (op->sym->m_dst && op->sym->m_dst != op->sym->m_src) { + klm_dst = mlx5_crypto_gcm_build_klm(qp, op, op->sym->m_dst, klm + klm_src); + if (!klm_dst) + return -EINVAL; + total_len *= 2; + } + klm_size = klm_src + klm_dst; + klm_align = RTE_ALIGN(klm_size, 4); + /* Get UMR WQE memory. */ + wqe = (struct mlx5_wqe_cseg *)mlx5_crypto_gcm_get_umr_wqe(qp); + memset(wqe, 0, priv->umr_wqe_size); + /* Set WQE control seg. Non-inline KLM UMR WQE size must be 9 WQE_DS. 
*/ + wqe->opcode = RTE_BE32(MLX5_OPCODE_UMR | ((uint32_t)qp->umr_pi << 8)); + wqe->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | 9); + wqe->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR << MLX5_COMP_MODE_OFFSET); + wqe->misc = rte_cpu_to_be_32(qp->mkey[idx]->id); + /* Set UMR WQE control seg. */ + ucseg = (struct mlx5_wqe_umr_ctrl_seg *)(wqe + 1); + ucseg->mkey_mask |= rte_cpu_to_be_64(MLX5_WQE_UMR_CTRL_MKEY_MASK_LEN); + ucseg->klm_octowords = rte_cpu_to_be_16(klm_align); + /* Set mkey context seg. */ + mkc = (struct mlx5_wqe_mkey_context_seg *)(ucseg + 1); + mkc->len = rte_cpu_to_be_64(total_len); + mkc->qpn_mkey = rte_cpu_to_be_32(0xffffff00 | (qp->mkey[idx]->id & 0xff)); + /* Set UMR pointer to data seg. */ + iklm = (struct mlx5_klm *)(mkc + 1); + iklm->address = rte_cpu_to_be_64((uintptr_t)((char *)klm)); + iklm->mkey = rte_cpu_to_be_32(qp->klm_mr.lkey); + iklm->byte_count = rte_cpu_to_be_32(klm_align); + data->mkey = rte_cpu_to_be_32(qp->mkey[idx]->id); + data->src_addr = 0; + data->src_bytes = sess->aad_len + op->sym->aead.data.length; + data->dst_bytes = data->src_bytes; + if (klm_dst) + data->dst_addr = (void *)(uintptr_t)(data->src_bytes + sess->tag_len); + else + data->dst_addr = 0; + if (sess->op_type == MLX5_CRYPTO_OP_TYPE_ENCRYPTION) + data->dst_bytes += sess->tag_len; + else + data->src_bytes += sess->tag_len; + /* Clear the padding memory. 
*/ + for (i = klm_size; i < klm_align; i++) { + klm[i].mkey = UINT32_MAX; + klm[i].address = 0; + klm[i].byte_count = 0; + } + /* Update PI and WQE */ + qp->umr_pi += priv->umr_wqe_stride; + qp->umr_wqe = (uint8_t *)wqe; + return 0; +} + +static __rte_always_inline void +mlx5_crypto_gcm_build_send_en(struct mlx5_crypto_qp *qp) +{ + uint32_t wqe_offset = (qp->umr_pi & (qp->umr_wqbbs - 1)) * MLX5_SEND_WQE_BB; + struct mlx5_wqe_cseg *cs = RTE_PTR_ADD(qp->umr_qp_obj.wqes, wqe_offset); + struct mlx5_wqe_qseg *qs = RTE_PTR_ADD(cs, sizeof(struct mlx5_wqe_cseg)); + + cs->opcode = RTE_BE32(MLX5_OPCODE_SEND_EN | ((uint32_t)qp->umr_pi << 8)); + cs->sq_ds = rte_cpu_to_be_32((qp->umr_qp_obj.qp->id << 8) | 2); + cs->flags = RTE_BE32((MLX5_COMP_ALWAYS << MLX5_COMP_MODE_OFFSET) | + MLX5_WQE_CTRL_FENCE); + cs->misc = RTE_BE32(0); + qs->max_index = rte_cpu_to_be_32(qp->pi); + qs->qpn_cqn = rte_cpu_to_be_32(qp->qp_obj.qp->id); + qp->umr_wqe = (uint8_t *)cs; + qp->umr_pi += 1; +} + +static __rte_always_inline void +mlx5_crypto_gcm_wqe_set(struct mlx5_crypto_qp *qp, + struct rte_crypto_op *op, + uint32_t idx, + struct mlx5_crypto_gcm_data *data) +{ + struct mlx5_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + struct mlx5_gga_wqe *wqe = &((struct mlx5_gga_wqe *)qp->qp_obj.wqes)[idx]; + union mlx5_gga_crypto_opaque *opaq = qp->opaque_mr.addr; + + memcpy(opaq[idx].cp.iv, + rte_crypto_op_ctod_offset(op, uint8_t *, sess->iv_offset), sess->iv_len); + opaq[idx].cp.tag_size = rte_cpu_to_be_32((uint32_t)sess->tag_len); + opaq[idx].cp.aad_size = rte_cpu_to_be_32((uint32_t)sess->aad_len); + /* Update control seg. */ + wqe->opcode = rte_cpu_to_be_32(MLX5_MMO_CRYPTO_OPC + (qp->pi << 8)); + wqe->gga_ctrl1 = sess->mmo_ctrl; + wqe->gga_ctrl2 = sess->dek_id; + /* Update input seg. */ + wqe->gather.bcount = rte_cpu_to_be_32(data->src_bytes); + wqe->gather.lkey = data->mkey; + wqe->gather.pbuf = rte_cpu_to_be_64((uintptr_t)data->src_addr); + /* Update output seg. 
*/ + wqe->scatter.bcount = rte_cpu_to_be_32(data->dst_bytes); + wqe->scatter.lkey = data->mkey; + wqe->scatter.pbuf = rte_cpu_to_be_64((uintptr_t)data->dst_addr); + qp->wqe = (uint8_t *)wqe; +} + +static uint16_t +mlx5_crypto_gcm_enqueue_burst(void *queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct mlx5_crypto_qp *qp = queue_pair; + struct mlx5_crypto_session *sess; + struct mlx5_crypto_priv *priv = qp->priv; + struct mlx5_crypto_gcm_data gcm_data; + struct rte_crypto_op *op; + uint16_t mask = qp->entries_n - 1; + uint16_t remain = qp->entries_n - (qp->pi - qp->ci); + uint32_t idx; + uint16_t umr_cnt = 0; + + if (remain < nb_ops) + nb_ops = remain; + else + remain = nb_ops; + if (unlikely(remain == 0)) + return 0; + do { + op = *ops++; + sess = CRYPTODEV_GET_SYM_SESS_PRIV(op->sym->session); + idx = qp->pi & mask; + if (mlx5_crypto_is_gcm_input_continuous(op)) { + gcm_data.src_addr = op->sym->aead.aad.data; + gcm_data.src_bytes = op->sym->aead.data.length + sess->aad_len; + gcm_data.dst_addr = gcm_data.src_addr; + gcm_data.dst_bytes = gcm_data.src_bytes; + if (sess->op_type == MLX5_CRYPTO_OP_TYPE_ENCRYPTION) + gcm_data.dst_bytes += sess->tag_len; + else + gcm_data.src_bytes += sess->tag_len; + gcm_data.mkey = mlx5_mr_mb2mr(&qp->mr_ctrl, op->sym->m_src); + } else { + if (unlikely(mlx5_crypto_gcm_build_umr(qp, op, idx, &gcm_data))) { + qp->stats.enqueue_err_count++; + if (remain != nb_ops) { + qp->stats.enqueued_count -= remain; + break; + } + return 0; + } + umr_cnt++; + } + mlx5_crypto_gcm_wqe_set(qp, op, idx, &gcm_data); + qp->ops[idx] = op; + qp->pi++; + } while (--remain); + qp->stats.enqueued_count += nb_ops; + if (!umr_cnt) { + mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->wqe, + qp->pi, &qp->qp_obj.db_rec[MLX5_SND_DBR], + !priv->uar.dbnc); + } else { + mlx5_crypto_gcm_build_send_en(qp); + mlx5_doorbell_ring(&priv->uar.bf_db, *(volatile uint64_t *)qp->umr_wqe, + qp->umr_pi, &qp->umr_qp_obj.db_rec[MLX5_SND_DBR], + 
!priv->uar.dbnc); + } + qp->has_umr = !!umr_cnt; + return nb_ops; +} + +static __rte_noinline void +mlx5_crypto_gcm_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op) +{ + const uint32_t idx = qp->ci & (qp->entries_n - 1); + volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *) + &qp->cq_obj.cqes[idx]; + + if (op) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + qp->stats.dequeue_err_count++; + DRV_LOG(ERR, "CQE ERR:%x.\n", rte_be_to_cpu_32(cqe->syndrome)); +} + +static __rte_always_inline void +mlx5_crypto_gcm_umr_cq_poll(struct mlx5_crypto_qp *qp) +{ + union { + struct { + uint16_t wqe_counter; + uint8_t rsvd5; + uint8_t op_own; + }; + uint32_t word; + } last_word; + uint16_t cur_wqe_counter; + + if (!qp->has_umr) + return; + last_word.word = rte_read32(&qp->umr_cq_obj.cqes[0].wqe_counter); + cur_wqe_counter = rte_be_to_cpu_16(last_word.wqe_counter); + if (cur_wqe_counter == qp->umr_last_pi) + return; + MLX5_ASSERT(MLX5_CQE_OPCODE(last_word.op_own) != + MLX5_CQE_INVALID); + if (unlikely((MLX5_CQE_OPCODE(last_word.op_own) == + MLX5_CQE_RESP_ERR || + MLX5_CQE_OPCODE(last_word.op_own) == + MLX5_CQE_REQ_ERR))) + qp->umr_errors++; + qp->umr_last_pi = cur_wqe_counter; + qp->umr_ci++; + rte_io_wmb(); + /* Ring CQ doorbell record. 
*/ + qp->umr_cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->umr_ci); + qp->has_umr = false; +} + +static uint16_t +mlx5_crypto_gcm_dequeue_burst(void *queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct mlx5_crypto_qp *qp = queue_pair; + volatile struct mlx5_cqe *restrict cqe; + struct rte_crypto_op *restrict op; + const unsigned int cq_size = qp->entries_n; + const unsigned int mask = cq_size - 1; + uint32_t idx; + uint32_t next_idx = qp->ci & mask; + const uint16_t max = RTE_MIN((uint16_t)(qp->pi - qp->ci), nb_ops); + uint16_t i = 0; + int ret; + + if (unlikely(max == 0)) + return 0; + /* Handle UMR CQE firstly.*/ + mlx5_crypto_gcm_umr_cq_poll(qp); + do { + idx = next_idx; + next_idx = (qp->ci + 1) & mask; + op = qp->ops[idx]; + cqe = &qp->cq_obj.cqes[idx]; + ret = check_cqe(cqe, cq_size, qp->ci); + rte_io_rmb(); + if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) { + if (unlikely(ret != MLX5_CQE_STATUS_HW_OWN)) + mlx5_crypto_gcm_cqe_err_handle(qp, op); + break; + } + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + ops[i++] = op; + qp->ci++; + } while (i < max); + if (likely(i != 0)) { + rte_io_wmb(); + qp->cq_obj.db_rec[0] = rte_cpu_to_be_32(qp->ci); + qp->stats.dequeued_count += i; + } + return i; +} + int mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) { @@ -386,6 +785,8 @@ mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv) dev_ops->sym_session_configure = mlx5_crypto_sym_gcm_session_configure; dev_ops->queue_pair_setup = mlx5_crypto_gcm_qp_setup; dev_ops->queue_pair_release = mlx5_crypto_gcm_qp_release; + crypto_dev->dequeue_burst = mlx5_crypto_gcm_dequeue_burst; + crypto_dev->enqueue_burst = mlx5_crypto_gcm_enqueue_burst; /* Generate GCM capability. */ ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo, mlx5_crypto_gcm_caps);
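As a closing illustration of the dequeue path: check_cqe() decides ownership by comparing the CQE owner bit against the parity of the consumer's current pass over the CQ ring. A reduced, hypothetical model of that test (a single owner bit assumed in bit 0 of op_own; the real mlx5 CQE layout and check_cqe() differ):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Reduced ownership model: cq_size is a power of two and the consumer
 * index ci is free-running. The owner bit flips on every wrap of the
 * ring, so (ci & cq_size) extracts the parity bit software expects on
 * the current pass.
 */
static inline bool
cqe_sw_owned(uint8_t op_own, uint32_t ci, uint32_t cq_size)
{
	uint8_t hw_owner = op_own & 1;              /* assumed owner bit */
	uint8_t sw_expected = !!(ci & cq_size);     /* pass parity */

	return hw_owner == sw_expected;
}
```

This is why the QP setup code above initializes the CQE op_own field to all ones: on pass zero software expects owner bit 0, so a freshly initialized CQE reads as HW-owned until the hardware completes it.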