crypto/mlx5: add max segment assert
Commit Message
Currently, for a multi-segment mbuf, an extra UMR WQE is built before
the crypto WQE to provide a contiguous memory space. The crypto WQE
then uses the key of that contiguous memory space as input.
This commit adds an assert, enabled in debug mode, for the maximum
supported number of segments, in case they exceed the UMR limitation.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/crypto/mlx5/mlx5_crypto_gcm.c | 6 ++++++
1 file changed, 6 insertions(+)
Comments
The Community Lab had an infra failure this morning and some patches,
including yours, were affected by false failures. The issue is now
resolved and we are rerunning the tests in question for all patches
submitted today.
On Fri, Mar 1, 2024 at 7:43 AM Suanming Mou <suanmingm@nvidia.com> wrote:
> Currently, for multi-segment mbuf, before crypto WQE an extra
> UMR WQE will be introduced to build the contiguous memory space.
> Crypto WQE uses that contiguous memory space key as input.
>
> This commit adds assert for maximum supported segments in debug
> mode in case the segments exceed UMR's limitation.
>
> Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
> drivers/crypto/mlx5/mlx5_crypto_gcm.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> index 8b9953b46d..fc6ade6711 100644
> --- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> +++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
> @@ -441,6 +441,9 @@ mlx5_crypto_gcm_get_op_info(struct mlx5_crypto_qp *qp,
> op_info->digest = NULL;
> op_info->src_addr = aad_addr;
> if (op->sym->m_dst && op->sym->m_dst != m_src) {
> + /* Add 2 for AAD and digest. */
> +               MLX5_ASSERT((uint32_t)(m_dst->nb_segs + m_src->nb_segs + 2) <
> +                           qp->priv->max_klm_num);
> op_info->is_oop = true;
> m_dst = op->sym->m_dst;
>                 dst_addr = rte_pktmbuf_mtod_offset(m_dst, void *, op->sym->aead.data.offset);
> @@ -457,6 +460,9 @@ mlx5_crypto_gcm_get_op_info(struct mlx5_crypto_qp *qp,
> op_info->need_umr = true;
> return;
> }
> + } else {
> + /* Add 2 for AAD and digest. */
> +               MLX5_ASSERT((uint32_t)(m_src->nb_segs) + 2 < qp->priv->max_klm_num);
> }
> if (m_src->nb_segs > 1) {
> op_info->need_umr = true;
> --
> 2.34.1
Applied to dpdk-next-crypto
Thanks.
@@ -441,6 +441,9 @@ mlx5_crypto_gcm_get_op_info(struct mlx5_crypto_qp *qp,
op_info->digest = NULL;
op_info->src_addr = aad_addr;
if (op->sym->m_dst && op->sym->m_dst != m_src) {
+ /* Add 2 for AAD and digest. */
+ MLX5_ASSERT((uint32_t)(m_dst->nb_segs + m_src->nb_segs + 2) <
+ qp->priv->max_klm_num);
op_info->is_oop = true;
m_dst = op->sym->m_dst;
dst_addr = rte_pktmbuf_mtod_offset(m_dst, void *, op->sym->aead.data.offset);
@@ -457,6 +460,9 @@ mlx5_crypto_gcm_get_op_info(struct mlx5_crypto_qp *qp,
op_info->need_umr = true;
return;
}
+ } else {
+ /* Add 2 for AAD and digest. */
+ MLX5_ASSERT((uint32_t)(m_src->nb_segs) + 2 < qp->priv->max_klm_num);
}
if (m_src->nb_segs > 1) {
op_info->need_umr = true;