[RFC,1/5] crypto/mlx5: add AES-GCM capability
Commit Message
AES-GCM provides both authenticated encryption and the ability to check
the integrity and authentication of additional authenticated data (AAD)
that is sent in the clear.
This commit adds the AES-GCM capability query and check. A new devarg
"algo" is added to identify whether the crypto PMD will be initialized as
AES-GCM (algo=1) or AES-XTS (algo=0, default).
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
---
doc/guides/nics/mlx5.rst | 8 +++
drivers/common/mlx5/mlx5_devx_cmds.c | 17 +++++
drivers/common/mlx5/mlx5_devx_cmds.h | 14 ++++
drivers/common/mlx5/mlx5_prm.h | 19 ++++-
drivers/crypto/mlx5/meson.build | 1 +
drivers/crypto/mlx5/mlx5_crypto.c | 30 +++++++-
drivers/crypto/mlx5/mlx5_crypto.h | 5 ++
drivers/crypto/mlx5/mlx5_crypto_gcm.c | 100 ++++++++++++++++++++++++++
8 files changed, 189 insertions(+), 5 deletions(-)
create mode 100644 drivers/crypto/mlx5/mlx5_crypto_gcm.c
Comments
> Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
>
> AES-GCM provides both authenticated encryption and the ability to check
> the integrity and authentication of additional authenticated data (AAD)
> that is sent in the clear.
>
> This commit adds the AES-GCM capability query and check. An new devarg
> "algo" is added to identify if the crypto PMD will be initialized as
> AES-GCM(algo=1) or AES-XTS(algo=0, default).
Why do you need a devarg for identifying the algorithm?
Is it not sufficient to use the enums rte_crypto_aead_algorithm and
rte_crypto_cipher_algorithm?
Devargs are normally added for things which are specific to a particular PMD
and which are not exposed via public APIs.
For identifying the algorithm, devargs are not needed.
>
> Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
> ---
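For reference, the public-API path referred to above looks roughly like the following minimal sketch (create_gcm_session(), its key parameters and the session mempool are illustrative assumptions, not part of this patch):

#include <rte_cryptodev.h>
#include <rte_mempool.h>

static void *
create_gcm_session(uint8_t dev_id, struct rte_mempool *sess_pool,
		   uint8_t *key, uint16_t key_len)
{
	/* The algorithm is carried by the AEAD transform itself. */
	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.aead = {
			.op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
			.algo = RTE_CRYPTO_AEAD_AES_GCM,
			.key = { .data = key, .length = key_len },
			.iv = { .offset = 0, .length = 12 },
			.digest_length = 16,
			.aad_length = 0,
		},
	};

	/* The PMD only learns the algorithm here, at session creation. */
	return rte_cryptodev_sym_session_create(dev_id, &xform, sess_pool);
}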
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, May 17, 2023 3:37 PM
> To: Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; dev@dpdk.org; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
>
> > Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> >
> > AES-GCM provides both authenticated encryption and the ability to
> > check the integrity and authentication of additional authenticated
> > data (AAD) that is sent in the clear.
> >
> > This commit adds the AES-GCM capability query and check. An new devarg
> > "algo" is added to identify if the crypto PMD will be initialized as
> > AES-GCM(algo=1) or AES-XTS(algo=0, default).
>
> Why do you need a devarg for identifying the algorithm?
> Is it not sufficient to use enums rte_crypto_aead_algorithm and
> rte_crypto_cipher_algorithm?
>
> Devargs are normally added for things which are specific to a particular PMD And
> which is not exposed via public APIs.
> For identification of algo, it is not needed to use devargs.
Due to a current HW limitation, the NIC can only be initialized in either GCM or XTS working mode during probe. It cannot provide both at runtime. That's the main reason for the devarg.
Session configuration with the algo is too late.
>
> >
> > Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
> > ---
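For context, with the probe-time approach described in the reply above, the working mode would be fixed when the device is probed, via a device argument along the lines of -a <pci_bdf>,class=crypto,algo=1 on the EAL command line (illustrative syntax; the exact form follows the existing mlx5 crypto devargs such as class and keytag).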
> > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> >
> > > Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> > >
> > > AES-GCM provides both authenticated encryption and the ability to
> > > check the integrity and authentication of additional authenticated
> > > data (AAD) that is sent in the clear.
> > >
> > > This commit adds the AES-GCM capability query and check. An new devarg
> > > "algo" is added to identify if the crypto PMD will be initialized as
> > > AES-GCM(algo=1) or AES-XTS(algo=0, default).
> >
> > Why do you need a devarg for identifying the algorithm?
> > Is it not sufficient to use enums rte_crypto_aead_algorithm and
> > rte_crypto_cipher_algorithm?
> >
> > Devargs are normally added for things which are specific to a particular PMD
> And
> > which is not exposed via public APIs.
> > For identification of algo, it is not needed to use devargs.
> Due to current HW limitation, the NIC can only be initialized as GCM or XTS
> working mode during probe. It's not able to provide both in running time. That's
> the main reason for the devarg.
> Session configure with algo is too late.
Is it not possible to reconfigure the NIC when GCM is detected in session create?
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, May 17, 2023 3:47 PM
> To: Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; dev@dpdk.org; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
>
> > > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM
> > > capability
> > >
> > > > Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> > > >
> > > > AES-GCM provides both authenticated encryption and the ability to
> > > > check the integrity and authentication of additional authenticated
> > > > data (AAD) that is sent in the clear.
> > > >
> > > > This commit adds the AES-GCM capability query and check. An new
> > > > devarg "algo" is added to identify if the crypto PMD will be
> > > > initialized as
> > > > AES-GCM(algo=1) or AES-XTS(algo=0, default).
> > >
> > > Why do you need a devarg for identifying the algorithm?
> > > Is it not sufficient to use enums rte_crypto_aead_algorithm and
> > > rte_crypto_cipher_algorithm?
> > >
> > > Devargs are normally added for things which are specific to a
> > > particular PMD
> > And
> > > which is not exposed via public APIs.
> > > For identification of algo, it is not needed to use devargs.
> > Due to current HW limitation, the NIC can only be initialized as GCM
> > or XTS working mode during probe. It's not able to provide both in
> > running time. That's the main reason for the devarg.
> > Session configure with algo is too late.
>
> Is it not possible to reconfigure the NIC when GCM is detected in session create?
That means in dev info we would need to put both XTS and GCM in the capabilities. But the fact is that if we reconfigure the NIC to GCM, XTS will no longer be supported. If a user wants to create both XTS and GCM sessions, one of them will fail.
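As background on the capability reporting discussed here, an application typically checks what a cryptodev advertises roughly as in this minimal sketch (dev_supports_aes_gcm() is an illustrative helper, not part of the patch):

#include <rte_cryptodev.h>

static int
dev_supports_aes_gcm(uint8_t dev_id)
{
	struct rte_cryptodev_sym_capability_idx idx = {
		.type = RTE_CRYPTO_SYM_XFORM_AEAD,
		.algo.aead = RTE_CRYPTO_AEAD_AES_GCM,
	};

	/* NULL means AES-GCM is absent from the PMD's capability array. */
	return rte_cryptodev_sym_capability_get(dev_id, &idx) != NULL;
}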
>
> > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> >
> > > > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM
> > > > capability
> > > >
> > > > > Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
> > > > >
> > > > > AES-GCM provides both authenticated encryption and the ability to
> > > > > check the integrity and authentication of additional authenticated
> > > > > data (AAD) that is sent in the clear.
> > > > >
> > > > > This commit adds the AES-GCM capability query and check. An new
> > > > > devarg "algo" is added to identify if the crypto PMD will be
> > > > > initialized as
> > > > > AES-GCM(algo=1) or AES-XTS(algo=0, default).
> > > >
> > > > Why do you need a devarg for identifying the algorithm?
> > > > Is it not sufficient to use enums rte_crypto_aead_algorithm and
> > > > rte_crypto_cipher_algorithm?
> > > >
> > > > Devargs are normally added for things which are specific to a
> > > > particular PMD
> > > And
> > > > which is not exposed via public APIs.
> > > > For identification of algo, it is not needed to use devargs.
> > > Due to current HW limitation, the NIC can only be initialized as GCM
> > > or XTS working mode during probe. It's not able to provide both in
> > > running time. That's the main reason for the devarg.
> > > Session configure with algo is too late.
> >
> > Is it not possible to reconfigure the NIC when GCM is detected in session
> create?
> That means in dev info, we need to put both XTS and GCM in the capability. But
> the fact is if we reconfigure the NIC to GCM, XTS will not be supported. If user
> wants to create both XTS and GCM session, one of them will fail.
That would fail even in the current patchset.
On another thought, would it not be better to create 2 separate driver instances in the
same folder, the way the ipsec_mb and cnxk drivers are organized?
You can change the function pointers based on the driver instance (mlx5_gcm, mlx5_xts).
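For illustration, the per-instance organization suggested above could look roughly like the sketch below at probe time (mlx5_crypto_xts_init() is hypothetical here; only mlx5_crypto_gcm_init() exists in this patch, and it currently only swaps the capability array):

#include "mlx5_crypto.h"

static int
mlx5_crypto_engine_init(struct mlx5_crypto_priv *priv, int is_gcm)
{
	/* Each helper would install its own capability array and, in a
	 * full split, its own session/enqueue/dequeue entry points.
	 */
	if (is_gcm)
		return mlx5_crypto_gcm_init(priv);
	return mlx5_crypto_xts_init(priv);
}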
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, May 17, 2023 4:03 PM
> To: Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; dev@dpdk.org; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM capability
>
> > > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM
> > > capability
> > >
> > > > > Subject: RE: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM
> > > > > capability
> > > > >
> > > > > > Subject: [EXT] [RFC PATCH 1/5] crypto/mlx5: add AES-GCM
> > > > > > capability
> > > > > >
> > > > > > AES-GCM provides both authenticated encryption and the ability
> > > > > > to check the integrity and authentication of additional
> > > > > > authenticated data (AAD) that is sent in the clear.
> > > > > >
> > > > > > This commit adds the AES-GCM capability query and check. An
> > > > > > new devarg "algo" is added to identify if the crypto PMD will
> > > > > > be initialized as
> > > > > > AES-GCM(algo=1) or AES-XTS(algo=0, default).
> > > > >
> > > > > Why do you need a devarg for identifying the algorithm?
> > > > > Is it not sufficient to use enums rte_crypto_aead_algorithm and
> > > > > rte_crypto_cipher_algorithm?
> > > > >
> > > > > Devargs are normally added for things which are specific to a
> > > > > particular PMD
> > > > And
> > > > > which is not exposed via public APIs.
> > > > > For identification of algo, it is not needed to use devargs.
> > > > Due to current HW limitation, the NIC can only be initialized as
> > > > GCM or XTS working mode during probe. It's not able to provide
> > > > both in running time. That's the main reason for the devarg.
> > > > Session configure with algo is too late.
> > >
> > > Is it not possible to reconfigure the NIC when GCM is detected in
> > > session
> > create?
> > That means in dev info, we need to put both XTS and GCM in the
> > capability. But the fact is if we reconfigure the NIC to GCM, XTS
> > will not be supported. If user wants to create both XTS and GCM session, one of
> them will fail.
>
> That would fail even in current patchset.
> On another thought, is it not good to create 2 separate instances of drivers in
> same folder, like ipsec_mb and cnxk drivers are organized.
> You can change the function pointers based on the driver instance(mlx5_gcm,
> mlx5_xts)
Currently, we initialize the capabilities based on the algo, so it will not fail.
Regarding separating the instances, yes, we will do that in the next version. We will keep most of the common code in mlx5_crypto.c, with mlx5_crypto_gcm.c for GCM and mlx5_crypto_xts.c for XTS.
@@ -1270,6 +1270,14 @@ for an additional list of options shared with other mlx5 drivers.
Set to zero by default.
+- ``algo`` parameter [int]
+
+ - 0. AES-XTS crypto.
+
+ - 1. AES-GCM crypto.
+
+ Set to zero (AES-XTS) by default.
+
Supported NICs
--------------
@@ -1197,6 +1197,23 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->crypto_wrapped_import_method = !!(MLX5_GET(crypto_caps,
hcattr, wrapped_import_method)
& 1 << 2);
+ attr->sw_wrapped_dek = MLX5_GET(crypto_caps, hcattr, sw_wrapped_dek_key_purpose) ?
+ MLX5_GET(crypto_caps, hcattr, sw_wrapped_dek_new) : 0;
+ attr->crypto_mmo.crypto_mmo_qp = MLX5_GET(crypto_caps, hcattr, crypto_mmo_qp);
+ attr->crypto_mmo.gcm_256_encrypt =
+ MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_encrypt);
+ attr->crypto_mmo.gcm_128_encrypt =
+ MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_encrypt);
+ attr->crypto_mmo.gcm_256_decrypt =
+ MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_256_decrypt);
+ attr->crypto_mmo.gcm_128_decrypt =
+ MLX5_GET(crypto_caps, hcattr, crypto_aes_gcm_128_decrypt);
+ attr->crypto_mmo.gcm_auth_tag_128 =
+ MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_128);
+ attr->crypto_mmo.gcm_auth_tag_96 =
+ MLX5_GET(crypto_caps, hcattr, gcm_auth_tag_96);
+ attr->crypto_mmo.log_crypto_mmo_max_size =
+ MLX5_GET(crypto_caps, hcattr, log_crypto_mmo_max_size);
}
if (hca_cap_2_sup) {
hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,
@@ -153,6 +153,18 @@ struct mlx5_hca_ipsec_attr {
struct mlx5_hca_ipsec_reformat_attr reformat_fdb;
};
+__extension__
+struct mlx5_hca_crypto_mmo_attr {
+ uint32_t crypto_mmo_qp:1;
+ uint32_t gcm_256_encrypt:1;
+ uint32_t gcm_128_encrypt:1;
+ uint32_t gcm_256_decrypt:1;
+ uint32_t gcm_128_decrypt:1;
+ uint32_t gcm_auth_tag_128:1;
+ uint32_t gcm_auth_tag_96:1;
+ uint32_t log_crypto_mmo_max_size:6;
+};
+
/* ISO C restricts enumerator values to range of 'int' */
__extension__
enum {
@@ -266,6 +278,7 @@ struct mlx5_hca_attr {
uint32_t import_kek:1; /* General obj type IMPORT_KEK supported. */
uint32_t credential:1; /* General obj type CREDENTIAL supported. */
uint32_t crypto_login:1; /* General obj type CRYPTO_LOGIN supported. */
+ uint32_t sw_wrapped_dek:16; /* DEKs wrapped by SW are supported */
uint32_t regexp_num_of_engines;
uint32_t log_max_ft_sampler_num:8;
uint32_t inner_ipv4_ihl:1;
@@ -281,6 +294,7 @@ struct mlx5_hca_attr {
struct mlx5_hca_flow_attr flow;
struct mlx5_hca_flex_attr flex;
struct mlx5_hca_ipsec_attr ipsec;
+ struct mlx5_hca_crypto_mmo_attr crypto_mmo;
int log_max_qp_sz;
int log_max_cq_sz;
int log_max_qp;
@@ -4654,7 +4654,9 @@ struct mlx5_ifc_crypto_caps_bits {
u8 synchronize_dek[0x1];
u8 int_kek_manual[0x1];
u8 int_kek_auto[0x1];
- u8 reserved_at_6[0x12];
+ u8 reserved_at_6[0xd];
+ u8 sw_wrapped_dek_key_purpose[0x1];
+ u8 reserved_at_14[0x4];
u8 wrapped_import_method[0x8];
u8 reserved_at_20[0x3];
u8 log_dek_max_alloc[0x5];
@@ -4671,8 +4673,19 @@ struct mlx5_ifc_crypto_caps_bits {
u8 log_dek_granularity[0x5];
u8 reserved_at_68[0x3];
u8 log_max_num_int_kek[0x5];
- u8 reserved_at_70[0x10];
- u8 reserved_at_80[0x780];
+ u8 sw_wrapped_dek_new[0x10];
+ u8 reserved_at_80[0x80];
+ u8 crypto_mmo_qp[0x1];
+ u8 crypto_aes_gcm_256_encrypt[0x1];
+ u8 crypto_aes_gcm_128_encrypt[0x1];
+ u8 crypto_aes_gcm_256_decrypt[0x1];
+ u8 crypto_aes_gcm_128_decrypt[0x1];
+ u8 gcm_auth_tag_128[0x1];
+ u8 gcm_auth_tag_96[0x1];
+ u8 reserved_at_107[0x3];
+ u8 log_crypto_mmo_max_size[0x6];
+ u8 reserved_at_110[0x10];
+ u8 reserved_at_120[0x6e0];
};
struct mlx5_ifc_crypto_commissioning_register_bits {
@@ -15,6 +15,7 @@ endif
sources = files(
'mlx5_crypto.c',
+ 'mlx5_crypto_gcm.c',
'mlx5_crypto_dek.c',
)
@@ -23,6 +23,13 @@
#define MLX5_CRYPTO_MAX_QPS 128
#define MLX5_CRYPTO_MAX_SEGS 56
+enum mlx5_crypto_pmd_support_algo {
+ MLX5_CRYPTO_PMD_SUPPORT_ALGO_NULL,
+ MLX5_CRYPTO_PMD_SUPPORT_ALGO_AES_XTS,
+ MLX5_CRYPTO_PMD_SUPPORT_ALGO_AES_GCM,
+ MLX5_CRYPTO_PMD_SUPPORT_ALGO_MAX,
+};
+
#define MLX5_CRYPTO_FEATURE_FLAGS(wrapped_mode) \
(RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | RTE_CRYPTODEV_FF_HW_ACCELERATED | \
RTE_CRYPTODEV_FF_IN_PLACE_SGL | RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT | \
@@ -102,7 +109,7 @@ mlx5_crypto_dev_infos_get(struct rte_cryptodev *dev,
dev_info->driver_id = mlx5_crypto_driver_id;
dev_info->feature_flags =
MLX5_CRYPTO_FEATURE_FLAGS(priv->is_wrapped_mode);
- dev_info->capabilities = mlx5_crypto_caps;
+ dev_info->capabilities = priv->caps;
dev_info->max_nb_queue_pairs = MLX5_CRYPTO_MAX_QPS;
dev_info->min_mbuf_headroom_req = 0;
dev_info->min_mbuf_tailroom_req = 0;
@@ -749,6 +756,14 @@ mlx5_crypto_args_check_handler(const char *key, const char *val, void *opaque)
attr->credential_pointer = (uint32_t)tmp;
} else if (strcmp(key, "keytag") == 0) {
devarg_prms->keytag = tmp;
+ } else if (strcmp(key, "algo") == 0) {
+ if (tmp == 1) {
+ devarg_prms->is_aes_gcm = 1;
+ } else if (tmp > 1) {
+ DRV_LOG(ERR, "Invalid algo.");
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
}
return 0;
}
@@ -765,6 +780,7 @@ mlx5_crypto_parse_devargs(struct mlx5_kvargs_ctrl *mkvlist,
"keytag",
"max_segs_num",
"wcs_file",
+ "algo",
NULL,
};
@@ -895,7 +911,9 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
rte_errno = ENOTSUP;
return -rte_errno;
}
- if (!cdev->config.hca_attr.crypto || !cdev->config.hca_attr.aes_xts) {
+ if (!cdev->config.hca_attr.crypto ||
+ (!cdev->config.hca_attr.aes_xts &&
+ !cdev->config.hca_attr.crypto_mmo.crypto_mmo_qp)) {
DRV_LOG(ERR, "Not enough capabilities to support crypto "
"operations, maybe old FW/OFED version?");
rte_errno = ENOTSUP;
@@ -924,6 +942,14 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev,
priv->cdev = cdev;
priv->crypto_dev = crypto_dev;
priv->is_wrapped_mode = wrapped_mode;
+ priv->caps = mlx5_crypto_caps;
+ /* Init and override AES-GCM configuration. */
+ if (devarg_prms.is_aes_gcm) {
+ ret = mlx5_crypto_gcm_init(priv);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to init AES-GCM crypto.");
+ }
+ }
if (mlx5_devx_uar_prepare(cdev, &priv->uar) != 0) {
rte_cryptodev_pmd_destroy(priv->crypto_dev);
return -1;
@@ -31,6 +31,7 @@ struct mlx5_crypto_priv {
struct mlx5_uar uar; /* User Access Region. */
uint32_t max_segs_num; /* Maximum supported data segs. */
struct mlx5_hlist *dek_hlist; /* Dek hash list. */
+ const struct rte_cryptodev_capabilities *caps;
struct rte_cryptodev_config dev_config;
struct mlx5_devx_obj *login_obj;
uint64_t keytag;
@@ -68,6 +69,7 @@ struct mlx5_crypto_devarg_params {
struct mlx5_devx_crypto_login_attr login_attr;
uint64_t keytag;
uint32_t max_segs_num;
+ uint32_t is_aes_gcm:1;
};
int
@@ -84,4 +86,7 @@ mlx5_crypto_dek_setup(struct mlx5_crypto_priv *priv);
void
mlx5_crypto_dek_unset(struct mlx5_crypto_priv *priv);
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv);
+
#endif /* MLX5_CRYPTO_H_ */
new file mode 100644
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_malloc.h>
+#include <rte_mempool.h>
+#include <rte_eal_paging.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+#include <bus_pci_driver.h>
+#include <rte_memory.h>
+
+#include <mlx5_glue.h>
+#include <mlx5_common.h>
+#include <mlx5_devx_cmds.h>
+#include <mlx5_common_os.h>
+
+#include "mlx5_crypto_utils.h"
+#include "mlx5_crypto.h"
+
+static struct rte_cryptodev_capabilities mlx5_crypto_gcm_caps[] = {
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ },
+ {
+ .op = RTE_CRYPTO_OP_TYPE_UNDEFINED,
+ }
+};
+
+static int
+mlx5_crypto_generate_gcm_cap(struct mlx5_hca_crypto_mmo_attr *mmo_attr,
+ struct rte_cryptodev_capabilities *cap)
+{
+ /* Init key size. */
+ if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt &&
+ mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+ cap->sym.aead.key_size.min = 16;
+ cap->sym.aead.key_size.max = 32;
+ cap->sym.aead.key_size.increment = 16;
+ } else if (mmo_attr->gcm_256_encrypt && mmo_attr->gcm_256_decrypt) {
+ cap->sym.aead.key_size.min = 32;
+ cap->sym.aead.key_size.max = 32;
+ cap->sym.aead.key_size.increment = 0;
+ } else if (mmo_attr->gcm_128_encrypt && mmo_attr->gcm_128_decrypt) {
+ cap->sym.aead.key_size.min = 16;
+ cap->sym.aead.key_size.max = 16;
+ cap->sym.aead.key_size.increment = 0;
+ } else {
+ DRV_LOG(ERR, "No available AES-GCM encryption/decryption supported.");
+ return -1;
+ }
+ /* Init tag size. */
+ if (mmo_attr->gcm_auth_tag_96 && mmo_attr->gcm_auth_tag_128) {
+ cap->sym.aead.digest_size.min = 12;
+ cap->sym.aead.digest_size.max = 16;
+ cap->sym.aead.digest_size.increment = 4;
+ } else if (mmo_attr->gcm_auth_tag_96) {
+ cap->sym.aead.digest_size.min = 12;
+ cap->sym.aead.digest_size.max = 12;
+ cap->sym.aead.digest_size.increment = 0;
+ } else if (mmo_attr->gcm_auth_tag_128) {
+ cap->sym.aead.digest_size.min = 16;
+ cap->sym.aead.digest_size.max = 16;
+ cap->sym.aead.digest_size.increment = 0;
+ } else {
+ DRV_LOG(ERR, "No available AES-GCM tag size supported.");
+ return -1;
+ }
+ /* Init AAD size. */
+ cap->sym.aead.aad_size.min = 0;
+ cap->sym.aead.aad_size.max = UINT16_MAX;
+ cap->sym.aead.aad_size.increment = 1;
+ /* Init IV size. */
+ cap->sym.aead.iv_size.min = 12;
+ cap->sym.aead.iv_size.max = 12;
+ cap->sym.aead.iv_size.increment = 0;
+ /* Init left items. */
+ cap->op = RTE_CRYPTO_OP_TYPE_SYMMETRIC;
+ cap->sym.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD;
+ cap->sym.aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
+ return 0;
+}
+
+int
+mlx5_crypto_gcm_init(struct mlx5_crypto_priv *priv)
+{
+ struct mlx5_common_device *cdev = priv->cdev;
+ int ret;
+
+ /* Generate GCM capability. */
+ ret = mlx5_crypto_generate_gcm_cap(&cdev->config.hca_attr.crypto_mmo,
+ mlx5_crypto_gcm_caps);
+ if (ret) {
+ DRV_LOG(ERR, "No enough AES-GCM cap.");
+ return -1;
+ }
+ priv->caps = mlx5_crypto_gcm_caps;
+ return 0;
+}
+