From patchwork Thu Sep  7 15:35:53 2023
From: Ciara Power <ciara.power@intel.com>
To: Kai Ji
Cc: dev@dpdk.org, arkadiuszx.kusztal@intel.com,
 venkatx.sivaramakrishnan@intel.com, stable@dpdk.org
Subject: [PATCH] crypto/qat: fix raw API null algorithm digest
Date: Thu, 7 Sep 2023 15:35:53 +0000
Message-Id: <20230907153553.1631938-1-ciara.power@intel.com>

QAT HW writes a digest of 0x00 bytes even when a digest of length 0 is
requested for the NULL algorithm. This caused test failures when the
test vector had a digest length of 0, because the output buffer
contained unexpectedly modified bytes.

By placing the digest into the cookie for NULL authentication, the
buffer remains unchanged as expected, and the digest is written off to
the side, as it will not be used anyway.

This fix was previously applied to the main QAT code path, but it also
needs to be included in the raw API code path.
Fixes: db0e952a5c01 ("crypto/qat: add NULL capability")
Cc: stable@dpdk.org

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 19 ++++++++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 41 +++++++++++++++++---
 2 files changed, 53 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index d25e1b2f3a..0a939161f9 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -637,6 +637,8 @@ qat_sym_dp_enqueue_single_auth_gen3(void *qp_data, uint8_t *drv_ctx,
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
 	uint32_t tail = dp_ctx->tail;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = digest;
 
 	req = (struct icp_qat_fw_la_bulk_req *)(
 		(uint8_t *)tx_queue->base_addr + tail);
@@ -650,7 +652,12 @@ qat_sym_dp_enqueue_single_auth_gen3(void *qp_data, uint8_t *drv_ctx,
 	if (unlikely(data_len < 0))
 		return -1;
 
-	enqueue_one_auth_job_gen3(ctx, cookie, req, digest, auth_iv, ofs,
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+		null_digest.iova = cookie->digest_null_phys_addr;
+		job_digest = &null_digest;
+	}
+
+	enqueue_one_auth_job_gen3(ctx, cookie, req, job_digest, auth_iv, ofs,
 			(uint32_t)data_len);
 
 	dp_ctx->tail = tail;
@@ -672,6 +679,8 @@ qat_sym_dp_enqueue_auth_jobs_gen3(void *qp_data, uint8_t *drv_ctx,
 	uint32_t tail;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = NULL;
 
 	n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num);
 	if (unlikely(n == 0)) {
@@ -704,7 +713,13 @@ qat_sym_dp_enqueue_auth_jobs_gen3(void *qp_data, uint8_t *drv_ctx,
 		if (unlikely(data_len < 0))
 			break;
 
-		enqueue_one_auth_job_gen3(ctx, cookie, req, &vec->digest[i],
+		if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+			null_digest.iova = cookie->digest_null_phys_addr;
+			job_digest = &null_digest;
+		} else
+			job_digest = &vec->digest[i];
+
+		enqueue_one_auth_job_gen3(ctx, cookie, req, job_digest,
 			&vec->auth_iv[i], ofs, (uint32_t)data_len);
 		tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
 	}
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 70938ba508..e4bcfa59e7 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -598,6 +598,8 @@ qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx,
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
 	uint32_t tail = dp_ctx->tail;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = digest;
 
 	req = (struct icp_qat_fw_la_bulk_req *)(
 		(uint8_t *)tx_queue->base_addr + tail);
@@ -611,8 +613,13 @@ qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx,
 	if (unlikely(data_len < 0))
 		return -1;
 
-	enqueue_one_auth_job_gen1(ctx, req, digest, auth_iv, ofs,
-			(uint32_t)data_len);
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+		null_digest.iova = cookie->digest_null_phys_addr;
+		job_digest = &null_digest;
+	}
+
+	enqueue_one_auth_job_gen1(ctx, req, job_digest, auth_iv, ofs,
+			(uint32_t)data_len);
 
 	dp_ctx->tail = tail;
 	dp_ctx->cached_enqueue++;
@@ -636,6 +643,8 @@ qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
 	uint32_t tail;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = NULL;
 
 	n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num);
 	if (unlikely(n == 0)) {
@@ -668,7 +677,14 @@ qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
 		if (unlikely(data_len < 0))
 			break;
 
-		enqueue_one_auth_job_gen1(ctx, req, &vec->digest[i],
+
+		if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+			null_digest.iova = cookie->digest_null_phys_addr;
+			job_digest = &null_digest;
+		} else
+			job_digest = &vec->digest[i];
+
+		enqueue_one_auth_job_gen1(ctx, req, job_digest,
 			&vec->auth_iv[i], ofs, (uint32_t)data_len);
 		tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
 	}
@@ -703,6 +719,8 @@ qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx,
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
 	uint32_t tail = dp_ctx->tail;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = digest;
 
 	req = (struct icp_qat_fw_la_bulk_req *)(
 		(uint8_t *)tx_queue->base_addr + tail);
@@ -715,8 +733,13 @@ qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx,
 	if (unlikely(data_len < 0))
 		return -1;
 
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+		null_digest.iova = cookie->digest_null_phys_addr;
+		job_digest = &null_digest;
+	}
+
 	if (unlikely(enqueue_one_chain_job_gen1(ctx, req, data, n_data_vecs,
-			NULL, 0, cipher_iv, digest, auth_iv, ofs,
+			NULL, 0, cipher_iv, job_digest, auth_iv, ofs,
 			(uint32_t)data_len)))
 		return -1;
 
@@ -743,6 +766,8 @@ qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
 	uint32_t tail;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest;
 
 	n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num);
 	if (unlikely(n == 0)) {
@@ -776,10 +801,16 @@ qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
 		if (unlikely(data_len < 0))
 			break;
 
+		if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+			null_digest.iova = cookie->digest_null_phys_addr;
+			job_digest = &null_digest;
+		} else
+			job_digest = &vec->digest[i];
+
 		if (unlikely(enqueue_one_chain_job_gen1(ctx, req,
 			vec->src_sgl[i].vec, vec->src_sgl[i].num,
 			NULL, 0,
-			&vec->iv[i], &vec->digest[i],
+			&vec->iv[i], job_digest,
 			&vec->auth_iv[i], ofs, (uint32_t)data_len)))
 			break;
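
---

Note for reviewers: every hunk above applies the same digest-redirect
pattern already used in the main QAT code path. The following is a
minimal sketch of that pattern in isolation. The types and names here
(fake_va_iova_ptr, fake_cookie, fake_session, FAKE_HW_AUTH_ALGO_NULL,
select_job_digest) are illustrative stand-ins, not the driver's real
definitions:

/*
 * Illustrative sketch only: simplified stand-ins for the driver's
 * qat_sym_op_cookie / session types, showing how the digest destination
 * is redirected for the NULL algorithm.
 */
#include <stdint.h>

#define FAKE_HW_AUTH_ALGO_NULL 0	/* stand-in for ICP_QAT_HW_AUTH_ALGO_NULL */

struct fake_va_iova_ptr {	/* mirrors rte_crypto_va_iova_ptr */
	void *va;		/* virtual address of the digest buffer */
	uint64_t iova;		/* IO virtual address the HW writes to */
};

struct fake_cookie {		/* per-request cookie with scratch digest space */
	uint64_t digest_null_phys_addr;
};

struct fake_session {
	int qat_hash_alg;
};

/*
 * Select the digest destination for one job: for the NULL algorithm,
 * point the HW at the cookie's scratch area so the 0x00 digest bytes it
 * unconditionally writes land off to the side instead of in the caller's
 * buffer; otherwise use the caller's digest pointer unchanged.
 */
static inline struct fake_va_iova_ptr *
select_job_digest(const struct fake_session *ctx,
		const struct fake_cookie *cookie,
		struct fake_va_iova_ptr *null_digest,
		struct fake_va_iova_ptr *user_digest)
{
	if (ctx->qat_hash_alg == FAKE_HW_AUTH_ALGO_NULL) {
		null_digest->iova = cookie->digest_null_phys_addr;
		return null_digest;	/* HW writes into the cookie instead */
	}
	return user_digest;
}

Since only the IOVA value is copied into the firmware request at enqueue
time, the stack-local null_digest wrapper does not need to outlive the
enqueue call; the cookie's scratch area, which that IOVA points at,
persists until the response is dequeued.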