From patchwork Wed Sep 21 12:50:32 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 116548
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To:
Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH v3 1/5] test/crypto: fix wireless auth digest segment
Date: Wed, 21 Sep 2022 12:50:32 +0000
Message-Id: <20220921125036.9104-2-ciara.power@intel.com>
In-Reply-To: <20220921125036.9104-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220921125036.9104-1-ciara.power@intel.com>
List-Id: DPDK patches and discussions

The segment size for some tests was too small to hold the auth digest.
This caused issues when using op->sym->auth.digest.data for comparisons
in the AESNI_MB PMD after a subsequent patch enables SGL.

For example, if the segment size is 2 and the digest size is 4, then 4
bytes are read from op->sym->auth.digest.data, which overflows into the
memory after the segment, rather than using the second segment that
contains the remaining half of the digest.
Fixes: 11c5485bb276 ("test/crypto: add scatter-gather tests for IP and OOP")

Signed-off-by: Ciara Power
Acked-by: Fan Zhang
---
 app/test/test_cryptodev.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 6ee4480399..5533c135b0 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -3040,6 +3040,14 @@ create_wireless_algo_auth_cipher_operation(
 			remaining_off -= rte_pktmbuf_data_len(sgl_buf);
 			sgl_buf = sgl_buf->next;
 		}
+
+		/* The last segment should be large enough to hold full digest */
+		if (sgl_buf->data_len < auth_tag_len) {
+			rte_pktmbuf_free(sgl_buf->next);
+			sgl_buf->next = NULL;
+			rte_pktmbuf_append(sgl_buf, auth_tag_len - sgl_buf->data_len);
+		}
+
 		sym_op->auth.digest.data = rte_pktmbuf_mtod_offset(sgl_buf,
 				uint8_t *, remaining_off);
 		sym_op->auth.digest.phys_addr = rte_pktmbuf_iova_offset(sgl_buf,

From patchwork Wed Sep 21 12:50:33 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 116549
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Fan Zhang, Pablo de Lara, Akhil Goyal
Cc: dev@dpdk.org, kai.ji@intel.com, Ciara Power, slawomirx.mrozowicz@intel.com
Subject: [PATCH v3 2/5] crypto/ipsec_mb: fix session creation for sessionless
Date: Wed, 21 Sep 2022 12:50:33 +0000
Message-Id: <20220921125036.9104-3-ciara.power@intel.com>
In-Reply-To: <20220921125036.9104-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220921125036.9104-1-ciara.power@intel.com>

Currently, for a sessionless op, the session taken from the mempool
contains some values previously set by a testcase that does use a
session. This is due to the session object not being reset before going
back into the mempool.
This caused issues when multiple sessionless testcases ran: the first
few testcases used the previously set objects, while subsequent
testcases used empty objects, as those had been correctly reset by the
sessionless testcases.

To fix this, session objects are now reset before being returned to the
mempool for session testcases. In addition, rather than pulling the
session object directly from the mempool for sessionless testcases, the
session_create() function is now used, which sets the required values,
such as nb_drivers.

Fixes: c75542ae4200 ("crypto/ipsec_mb: introduce IPsec_mb framework")
Fixes: b3bbd9e5f265 ("cryptodev: support device independent sessions")
Cc: roy.fan.zhang@intel.com
Cc: slawomirx.mrozowicz@intel.com

Signed-off-by: Ciara Power
Acked-by: Fan Zhang
---
v3:
  - Modified the fix to reset sessions, and to ensure values are then
    set for sessionless testcases. The v2 fix only ensured the same
    values in session objects were reused, as they were not being reset,
    which was incorrect.
---
 drivers/crypto/ipsec_mb/ipsec_mb_private.h | 12 ++++++++----
 lib/cryptodev/rte_cryptodev.c              |  1 +
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index d074b33133..8ec23c172d 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -415,7 +415,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
 	uint32_t driver_id = ipsec_mb_get_driver_id(qp->pmd_type);
 	struct rte_crypto_sym_op *sym_op = op->sym;
 	uint8_t sess_type = op->sess_type;
-	void *_sess;
+	struct rte_cryptodev_sym_session *_sess;
 	void *_sess_private_data = NULL;
 	struct ipsec_mb_internals *pmd_data = &ipsec_mb_pmds[qp->pmd_type];

@@ -426,8 +426,12 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
 				driver_id);
 		break;
 	case RTE_CRYPTO_OP_SESSIONLESS:
-		if (!qp->sess_mp ||
-		    rte_mempool_get(qp->sess_mp, (void **)&_sess))
+		if (!qp->sess_mp)
+			return NULL;
+
+		_sess = rte_cryptodev_sym_session_create(qp->sess_mp);
+
+		if (!_sess)
 			return NULL;

 		if (!qp->sess_mp_priv ||
@@ -443,7 +447,7 @@ ipsec_mb_get_session_private(struct ipsec_mb_qp *qp, struct rte_crypto_op *op)
 			sess = NULL;
 		}

-		sym_op->session = (struct rte_cryptodev_sym_session *)_sess;
+		sym_op->session = _sess;
 		set_sym_session_private_data(sym_op->session, driver_id,
 			_sess_private_data);
 		break;
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 42f3221052..af24969ed5 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -2032,6 +2032,7 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)

 	/* Return session to mempool */
 	sess_mp = rte_mempool_from_obj(sess);
+	memset(sess, 0, rte_cryptodev_sym_get_existing_header_session_size(sess));
 	rte_mempool_put(sess_mp, sess);

 	rte_cryptodev_trace_sym_session_free(sess);

From patchwork Wed Sep 21 12:50:34 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 116550
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Fan Zhang, Pablo de Lara
Cc: dev@dpdk.org,
kai.ji@intel.com, Ciara Power
Subject: [PATCH v3 3/5] crypto/ipsec_mb: add remaining SGL support
Date: Wed, 21 Sep 2022 12:50:34 +0000
Message-Id: <20220921125036.9104-4-ciara.power@intel.com>
In-Reply-To: <20220921125036.9104-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220921125036.9104-1-ciara.power@intel.com>

The intel-ipsec-mb library supports SGL for the GCM and ChaChaPoly
algorithms using the JOB API. This support was added to the AESNI_MB PMD
previously, but the SGL feature flags could not be advertised because
the other algorithms had no SGL support.

This patch adds a workaround SGL approach for the other algorithms using
the JOB API. The segmented input buffers are copied into a linear
buffer, which is passed as a single job to intel-ipsec-mb. The job is
processed, and on return the linear buffer is split back into the
original destination segments.

Existing AESNI_MB testcases pass with these feature flags added.

Signed-off-by: Ciara Power
Acked-by: Fan Zhang
---
v3:
  - Reduced code duplication by adding a reusable function.
  - Changed int to uint64_t for total_len.
v2:
  - Small improvements when copying segments to linear buffer.
  - Added documentation changes.
---
 doc/guides/cryptodevs/aesni_mb.rst          |   1 -
 doc/guides/cryptodevs/features/aesni_mb.ini |   4 +
 doc/guides/rel_notes/release_22_11.rst      |   5 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 180 ++++++++++++++++----
 4 files changed, 156 insertions(+), 34 deletions(-)

diff --git a/doc/guides/cryptodevs/aesni_mb.rst b/doc/guides/cryptodevs/aesni_mb.rst
index 07222ee117..59c134556f 100644
--- a/doc/guides/cryptodevs/aesni_mb.rst
+++ b/doc/guides/cryptodevs/aesni_mb.rst
@@ -72,7 +72,6 @@ Protocol offloads:
 Limitations
 -----------

-* Chained mbufs are not supported.
 * Out-of-place is not supported for combined Crypto-CRC DOCSIS security
   protocol.
 * RTE_CRYPTO_CIPHER_DES_DOCSISBPI is not supported for combined Crypto-CRC
diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index 3c648a391e..e4e965c35a 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -12,6 +12,10 @@ CPU AVX = Y
 CPU AVX2 = Y
 CPU AVX512 = Y
 CPU AESNI = Y
+In Place SGL = Y
+OOP SGL In SGL Out = Y
+OOP SGL In LB Out = Y
+OOP LB In SGL Out = Y
 OOP LB In LB Out = Y
 CPU crypto = Y
 Symmetric sessionless = Y
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 7fab9d6550..b3717ce9e3 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -60,6 +60,11 @@ New Features
   * Added AES-CCM support in lookaside protocol (IPsec) for CN9K & CN10K.
   * Added AES & DES DOCSIS algorithm support in lookaside crypto for CN9K.

+* **Added SGL support to AESNI_MB PMD.**
+
+  Added support for SGL to AESNI_MB PMD. Support for inplace,
+  OOP SGL in SGL out, OOP LB in SGL out, and OOP SGL in LB out added.
+
 Removed Items
 -------------

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 6d5d3ce8eb..62f7d4ee5a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -937,7 +937,7 @@ static inline uint64_t
 auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
 		uint32_t oop, const uint32_t auth_offset,
 		const uint32_t cipher_offset, const uint32_t auth_length,
-		const uint32_t cipher_length)
+		const uint32_t cipher_length, uint8_t lb_sgl)
 {
 	struct rte_mbuf *m_src, *m_dst;
 	uint8_t *p_src, *p_dst;
@@ -945,7 +945,7 @@ auth_start_offset(struct rte_crypto_op *op, struct aesni_mb_session *session,
 	uint32_t cipher_end, auth_end;

 	/* Only cipher then hash needs special calculation. */
-	if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH)
+	if (!oop || session->chain_order != IMB_ORDER_CIPHER_HASH || lb_sgl)
 		return auth_offset;

 	m_src = op->sym->m_src;
@@ -1159,6 +1159,81 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
 	return 0;
 }

+static uint64_t
+sgl_linear_cipher_auth_len(IMB_JOB *job, uint64_t *auth_len)
+{
+	uint64_t cipher_len;
+
+	if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
+		cipher_len = (job->msg_len_to_cipher_in_bits >> 3) +
+				(job->cipher_start_src_offset_in_bits >> 3);
+	else
+		cipher_len = job->msg_len_to_cipher_in_bytes +
+				job->cipher_start_src_offset_in_bytes;
+
+	if (job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+			job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN)
+		*auth_len = (job->msg_len_to_hash_in_bits >> 3) +
+				job->hash_start_src_offset_in_bytes;
+	else if (job->hash_alg == IMB_AUTH_AES_GMAC)
+		*auth_len = job->u.GCM.aad_len_in_bytes;
+	else
+		*auth_len = job->msg_len_to_hash_in_bytes +
+				job->hash_start_src_offset_in_bytes;
+
+	return RTE_MAX(*auth_len, cipher_len);
+}
+
+static int
+handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+		struct aesni_mb_session *session)
+{
+	uint64_t auth_len, total_len;
+	uint8_t *src, *linear_buf = NULL;
+	int lb_offset = 0;
+	struct rte_mbuf *src_seg;
+	uint16_t src_len;
+
+	total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+	linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
+	if (linear_buf == NULL) {
+		IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+		return -1;
+	}
+
+	for (src_seg = op->sym->m_src; (src_seg != NULL) &&
+			(total_len - lb_offset > 0);
+			src_seg = src_seg->next) {
+		src = rte_pktmbuf_mtod(src_seg, uint8_t *);
+		src_len = RTE_MIN(src_seg->data_len, total_len - lb_offset);
+		rte_memcpy(linear_buf + lb_offset, src, src_len);
+		lb_offset += src_len;
+	}
+
+	job->src = linear_buf;
+	job->dst = linear_buf + dst_offset;
+	job->user_data2 = linear_buf;
+
+	if (job->hash_alg == IMB_AUTH_AES_GMAC)
+		job->u.GCM.aad = linear_buf;
+
+	if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY)
+		job->auth_tag_output = linear_buf + lb_offset;
+	else
+		job->auth_tag_output = linear_buf + auth_len;
+
+	return 0;
+}
+
+static inline int
+imb_lib_support_sgl_algo(IMB_CIPHER_MODE alg)
+{
+	if (alg == IMB_CIPHER_CHACHA20_POLY1305
+			|| alg == IMB_CIPHER_GCM)
+		return 1;
+	return 0;
+}

 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
@@ -1171,7 +1246,8 @@ handle_aead_sgl_job(IMB_JOB *job, IMB_MGR *mb_mgr,
  *
  * @return
  * - 0 on success, the IMB_JOB will be filled
- * - -1 if invalid session, IMB_JOB will not be filled
+ * - -1 if invalid session or errors allocating SGL linear buffer,
+ *   IMB_JOB will not be filled
  */
 static inline int
 set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
@@ -1191,6 +1267,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	uint32_t total_len;
 	IMB_JOB base_job;
 	uint8_t sgl = 0;
+	uint8_t lb_sgl = 0;
 	int ret;

 	session = ipsec_mb_get_session_private(qp, op);
@@ -1199,18 +1276,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		return -1;
 	}

-	if (op->sym->m_src->nb_segs > 1) {
-		if (session->cipher.mode != IMB_CIPHER_GCM
-				&& session->cipher.mode !=
-				IMB_CIPHER_CHACHA20_POLY1305) {
-			op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			IPSEC_MB_LOG(ERR, "Device only supports SGL for AES-GCM"
-					" or CHACHA20_POLY1305 algorithms.");
-			return -1;
-		}
-		sgl = 1;
-	}
-
 	/* Set crypto operation */
 	job->chain_order = session->chain_order;

@@ -1233,6 +1298,26 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->dec_keys = session->cipher.expanded_aes_keys.decode;
 	}

+	if (!op->sym->m_dst) {
+		/* in-place operation */
+		m_dst = m_src;
+		oop = 0;
+	} else if (op->sym->m_dst == op->sym->m_src) {
+		/* in-place operation */
+		m_dst = m_src;
+		oop = 0;
+	} else {
+		/* out-of-place operation */
+		m_dst = op->sym->m_dst;
+		oop = 1;
+	}
+
+	if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
+		sgl = 1;
+		if (!imb_lib_support_sgl_algo(session->cipher.mode))
+			lb_sgl = 1;
+	}
+
 	switch (job->hash_alg) {
 	case IMB_AUTH_AES_XCBC:
 		job->u.XCBC._k1_expanded = session->auth.xcbc.k1_expanded;
@@ -1331,20 +1416,6 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		m_offset = 0;
 	}

-	if (!op->sym->m_dst) {
-		/* in-place operation */
-		m_dst = m_src;
-		oop = 0;
-	} else if (op->sym->m_dst == op->sym->m_src) {
-		/* in-place operation */
-		m_dst = m_src;
-		oop = 0;
-	} else {
-		/* out-of-place operation */
-		m_dst = op->sym->m_dst;
-		oop = 1;
-	}
-
 	/* Set digest output location */
 	if (job->hash_alg != IMB_AUTH_NULL &&
 			session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
@@ -1435,7 +1506,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->hash_start_src_offset_in_bytes = auth_start_offset(op,
 				session, oop, auth_off_in_bytes,
 				ciph_off_in_bytes, auth_len_in_bytes,
-				ciph_len_in_bytes);
+				ciph_len_in_bytes, lb_sgl);
 		job->msg_len_to_hash_in_bits = op->sym->auth.data.length;

 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1452,7 +1523,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->hash_start_src_offset_in_bytes = auth_start_offset(op,
 				session, oop, auth_off_in_bytes,
 				ciph_off_in_bytes, auth_len_in_bytes,
-				ciph_len_in_bytes);
+				ciph_len_in_bytes, lb_sgl);
 		job->msg_len_to_hash_in_bytes = auth_len_in_bytes;

 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1464,7 +1535,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 				session, oop, op->sym->auth.data.offset,
 				op->sym->cipher.data.offset,
 				op->sym->auth.data.length,
-				op->sym->cipher.data.length);
+				op->sym->cipher.data.length, lb_sgl);
 		job->msg_len_to_hash_in_bytes = op->sym->auth.data.length;

 		job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
@@ -1525,6 +1596,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	job->user_data = op;

 	if (sgl) {
+
+		if (lb_sgl)
+			return handle_sgl_linear(job, op, m_offset, session);
+
 		base_job = *job;
 		job->sgl_state = IMB_SGL_INIT;
 		job = IMB_SUBMIT_JOB(mb_mgr);
@@ -1695,6 +1770,31 @@ generate_digest(IMB_JOB *job, struct rte_crypto_op *op,
 			sess->auth.req_digest_len);
 }

+static void
+post_process_sgl_linear(struct rte_crypto_op *op, IMB_JOB *job,
+		struct aesni_mb_session *sess, uint8_t *linear_buf)
+{
+
+	int lb_offset = 0;
+	struct rte_mbuf *m_dst = op->sym->m_dst == NULL ?
+			op->sym->m_src : op->sym->m_dst;
+	uint16_t total_len, dst_len;
+	uint64_t auth_len;
+	uint8_t *dst;
+
+	total_len = sgl_linear_cipher_auth_len(job, &auth_len);
+
+	if (sess->auth.operation != RTE_CRYPTO_AUTH_OP_VERIFY)
+		total_len += job->auth_tag_output_len_in_bytes;
+
+	for (; (m_dst != NULL) && (total_len - lb_offset > 0); m_dst = m_dst->next) {
+		dst = rte_pktmbuf_mtod(m_dst, uint8_t *);
+		dst_len = RTE_MIN(m_dst->data_len, total_len - lb_offset);
+		rte_memcpy(dst, linear_buf + lb_offset, dst_len);
+		lb_offset += dst_len;
+	}
+}
+
 /**
  * Process a completed job and return rte_mbuf which job processed
  *
@@ -1712,6 +1812,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint32_t driver_id = ipsec_mb_get_driver_id(
 						IPSEC_MB_PMD_TYPE_AESNI_MB);
+	uint8_t *linear_buf = NULL;

 #ifdef AESNI_MB_DOCSIS_SEC_ENABLED
 	uint8_t is_docsis_sec = 0;
@@ -1740,6 +1841,14 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	case IMB_STATUS_COMPLETED:
 		op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;

+		if ((op->sym->m_src->nb_segs > 1 ||
+				(op->sym->m_dst != NULL &&
+				op->sym->m_dst->nb_segs > 1)) &&
+				!imb_lib_support_sgl_algo(sess->cipher.mode)) {
+			linear_buf = (uint8_t *) job->user_data2;
+			post_process_sgl_linear(op, job, sess, linear_buf);
+		}
+
 		if (job->hash_alg == IMB_AUTH_NULL)
 			break;

@@ -1766,6 +1875,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	default:
 		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 	}
+	rte_free(linear_buf);
 }

 /* Free session if a session-less crypto op */
@@ -2252,7 +2362,11 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
 			RTE_CRYPTODEV_FF_SYM_CPU_CRYPTO |
 			RTE_CRYPTODEV_FF_NON_BYTE_ALIGNED_DATA |
-			RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+			RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT;

 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;

From patchwork Wed Sep 21 12:50:35 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 116551
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH v3 4/5] test/crypto: add OOP snow3g SGL tests
Date: Wed, 21 Sep 2022 12:50:35 +0000
Message-Id: <20220921125036.9104-5-ciara.power@intel.com>
In-Reply-To: <20220921125036.9104-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220921125036.9104-1-ciara.power@intel.com>

More tests are added to test variations of OOP SGL for snow3g.
This includes LB_IN_SGL_OUT and SGL_IN_LB_OUT.

Signed-off-by: Ciara Power
Acked-by: Fan Zhang
---
 app/test/test_cryptodev.c | 48 +++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 5533c135b0..a48c0abae6 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -4347,7 +4347,8 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
 }

 static int
-test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
+test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata,
+		uint8_t sgl_in, uint8_t sgl_out)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
 	struct crypto_unittest_params *ut_params = &unittest_params;
@@ -4378,9 +4379,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)

 	uint64_t feat_flags = dev_info.feature_flags;

-	if (!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT)) {
-		printf("Device doesn't support out-of-place scatter-gather "
-				"in both input and output mbufs. "
+	if (((sgl_in && sgl_out) && !(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT))
+			|| ((!sgl_in && sgl_out) &&
+			!(feat_flags & RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT))
+			|| ((sgl_in && !sgl_out) &&
+			!(feat_flags & RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT))) {
+		printf("Device doesn't support out-of-place scatter gather type. "
 				"Test Skipped.\n");
 		return TEST_SKIPPED;
 	}
@@ -4405,10 +4409,21 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
 	/* the algorithms block size */
 	plaintext_pad_len = RTE_ALIGN_CEIL(plaintext_len, 16);

-	ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
-			plaintext_pad_len, 10, 0);
-	ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
-			plaintext_pad_len, 3, 0);
+	if (sgl_in)
+		ut_params->ibuf = create_segmented_mbuf(ts_params->mbuf_pool,
+				plaintext_pad_len, 10, 0);
+	else {
+		ut_params->ibuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+		rte_pktmbuf_append(ut_params->ibuf, plaintext_pad_len);
+	}
+
+	if (sgl_out)
+		ut_params->obuf = create_segmented_mbuf(ts_params->mbuf_pool,
+				plaintext_pad_len, 3, 0);
+	else {
+		ut_params->obuf = rte_pktmbuf_alloc(ts_params->mbuf_pool);
+		rte_pktmbuf_append(ut_params->obuf, plaintext_pad_len);
+	}

 	TEST_ASSERT_NOT_NULL(ut_params->ibuf,
 			"Failed to allocate input buffer in mempool");
@@ -6762,9 +6777,20 @@ test_snow3g_encryption_test_case_1_oop(void)
 static int
 test_snow3g_encryption_test_case_1_oop_sgl(void)
 {
-	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1);
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 1);
+}
+
+static int
+test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out(void)
+{
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 0, 1);
 }

+static int
+test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out(void)
+{
+	return test_snow3g_encryption_oop_sgl(&snow3g_test_case_1, 1, 0);
+}

 static int
 test_snow3g_encryption_test_case_1_offset_oop(void)
@@ -15993,6 +16019,10 @@ static struct unit_test_suite cryptodev_snow3g_testsuite = {
 			test_snow3g_encryption_test_case_1_oop),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 			test_snow3g_encryption_test_case_1_oop_sgl),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_snow3g_encryption_test_case_1_oop_lb_in_sgl_out),
+		TEST_CASE_ST(ut_setup, ut_teardown,
+			test_snow3g_encryption_test_case_1_oop_sgl_in_lb_out),
 		TEST_CASE_ST(ut_setup, ut_teardown,
 			test_snow3g_encryption_test_case_1_offset_oop),
 		TEST_CASE_ST(ut_setup, ut_teardown,

From patchwork Wed Sep 21 12:50:36 2022
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 116552
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: Akhil Goyal, Fan Zhang, Yipeng Wang, Sameh Gobriel, Bruce Richardson, Vladimir Medvedkin
Cc: dev@dpdk.org, kai.ji@intel.com, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH v3 5/5] test/crypto: add remaining blockcipher SGL tests
Date: Wed, 21 Sep 2022 12:50:36 +0000
Message-Id: <20220921125036.9104-6-ciara.power@intel.com>
In-Reply-To: <20220921125036.9104-1-ciara.power@intel.com>
References: <20220812132334.75707-1-ciara.power@intel.com> <20220921125036.9104-1-ciara.power@intel.com>

The current blockcipher test function supports only two types of SGL test, INPLACE or OOP_SGL_IN_LB_OUT. These types are hardcoded into the function, with the number of segments always set to 3.

To ensure all SGL types are tested, blockcipher test vectors now have fields to specify the SGL type and the number of segments. If these fields are missing, the previous defaults are used: either INPLACE or OOP_SGL_IN_LB_OUT, with 3 segments.

Some AES and Hash vectors are modified to use these new fields, and new AES tests are added to cover the SGL types that were not previously being tested.
Signed-off-by: Ciara Power Acked-by: Fan Zhang --- app/test/test_cryptodev_aes_test_vectors.h | 345 +++++++++++++++++--- app/test/test_cryptodev_blockcipher.c | 50 +-- app/test/test_cryptodev_blockcipher.h | 2 + app/test/test_cryptodev_hash_test_vectors.h | 8 +- 4 files changed, 335 insertions(+), 70 deletions(-) diff --git a/app/test/test_cryptodev_aes_test_vectors.h b/app/test/test_cryptodev_aes_test_vectors.h index a797af1b00..2c1875d3d9 100644 --- a/app/test/test_cryptodev_aes_test_vectors.h +++ b/app/test/test_cryptodev_aes_test_vectors.h @@ -4163,12 +4163,44 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " - "Scatter Gather", + "Scatter Gather (Inplace)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (LB in SGL out)", .test_data = &aes_test_data_2, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 }, + { + .test_descr = "AES-192-CTR XCBC Decryption Digest Verify " + "Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_2, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 + }, + { .test_descr = "AES-256-CTR 
HMAC-SHA1 Encryption Digest", .test_data = &aes_test_data_3, @@ -4193,11 +4225,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " - "Scatter Gather", + "Scatter Gather (Inplace)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP 16 segs (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 16 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (LB in SGL out)", .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " + "Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4207,10 +4280,52 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA1 
Decryption Digest " - "Verify Scatter Gather", + "Verify Scatter Gather (Inplace)", .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP 16 segs (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 16 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (LB in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " + "Verify Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4255,12 +4370,46 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " - "Scatter Gather Sessionless", + "Scatter Gather Sessionless (Inplace)", + .test_data = &aes_test_data_6, + .op_mask = 
BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (SGL in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (LB in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | + BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Encryption Digest " + "Scatter Gather Sessionless OOP (SGL in LB out)", .test_data = &aes_test_data_6, .op_mask = BLOCKCIPHER_TEST_OP_ENC_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " @@ -4270,11 +4419,42 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { }, { .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " - "Verify Scatter Gather", + "Verify Scatter Gather (Inplace)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 2 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (SGL in SGL out)", .test_data = &aes_test_data_6, .op_mask = 
BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (LB in SGL out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC HMAC-SHA512 Decryption Digest " + "Verify Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_6, + .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY_DEC, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC XCBC Encryption Digest", @@ -4358,6 +4538,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN_ENC, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest " @@ -4382,6 +4564,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4397,6 +4581,8 @@ static const struct blockcipher_test_case aes_chain_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_DEC_AUTH_VERIFY, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4421,6 +4607,8 @@ static 
const struct blockcipher_test_case aes_chain_test_cases[] = { .feature_mask = BLOCKCIPHER_TEST_FEATURE_SESSIONLESS | BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_DIGEST_ENCRYPTED, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "AES-128-CBC HMAC-SHA1 Decryption Digest " @@ -4504,6 +4692,41 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .test_data = &aes_test_data_4, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather (Inplace)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (LB in SGL out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-128-CBC Encryption Scatter gather OOP (SGL in LB out)", + .test_data = &aes_test_data_4, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 + }, { .test_descr = "AES-128-CBC Decryption", .test_data = &aes_test_data_4, @@ -4515,11 +4738,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, }, { - .test_descr = "AES-192-CBC Encryption Scatter gather", + .test_descr = "AES-192-CBC Encryption Scatter gather 
(Inplace)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in SGL out)", .test_data = &aes_test_data_10, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (LB in SGL out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Encryption Scatter gather OOP (SGL in LB out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-192-CBC Decryption", @@ -4527,10 +4778,39 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, }, { - .test_descr = "AES-192-CBC Decryption Scatter Gather", + .test_descr = "AES-192-CBC Decryption Scatter Gather (Inplace)", .test_data = &aes_test_data_10, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in SGL out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (LB in SGL out)", 
+ .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT, + .sgl_segs = 3 + }, + { + .test_descr = "AES-192-CBC Decryption Scatter Gather OOP (SGL in LB out)", + .test_data = &aes_test_data_10, + .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG | + BLOCKCIPHER_TEST_FEATURE_OOP, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-CBC Encryption", @@ -4689,67 +4969,42 @@ static const struct blockcipher_test_case aes_cipheronly_test_cases[] = { }, { .test_descr = "AES-256-XTS Encryption (512-byte plaintext" - " Dataunit 512) Scater gather OOP", + " Dataunit 512) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Decryption (512-byte plaintext" - " Dataunit 512) Scater gather OOP", + " Dataunit 512) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_512, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Encryption (512-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Decryption (512-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_512_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, - .feature_mask = 
BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Encryption (4096-byte plaintext" - " Dataunit 4096) Scater gather OOP", + " Dataunit 4096) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096, .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "AES-256-XTS Decryption (4096-byte plaintext" - " Dataunit 4096) Scater gather OOP", + " Dataunit 4096) Scatter gather OOP (SGL in LB out)", .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_4096, .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Encryption (4096-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_ENCRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, - }, - { - .test_descr = "AES-256-XTS Decryption (4096-byte plaintext" - " Dataunit 0) Scater gather OOP", - .test_data = &aes_test_data_xts_wrapped_key_48_pt_4096_du_0, - .op_mask = BLOCKCIPHER_TEST_OP_DECRYPT, - .feature_mask = BLOCKCIPHER_TEST_FEATURE_OOP | - BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT, + .sgl_segs = 3 }, { .test_descr = "cipher-only - NULL algo - x8 - encryption", diff --git a/app/test/test_cryptodev_blockcipher.c b/app/test/test_cryptodev_blockcipher.c index b5813b956f..f1ef0b606f 100644 --- a/app/test/test_cryptodev_blockcipher.c +++ b/app/test/test_cryptodev_blockcipher.c @@ -96,7 +96,9 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, uint8_t tmp_dst_buf[MBUF_SIZE]; uint32_t pad_len; - int nb_segs = 1; + int nb_segs_in = 1; + int nb_segs_out = 1; 
+ uint64_t sgl_type = t->sgl_flag; uint32_t nb_iterates = 0; rte_cryptodev_info_get(dev_id, &dev_info); @@ -121,30 +123,31 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } } if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_SG) { - uint64_t oop_flag = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT; + if (sgl_type == 0) { + if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) + sgl_type = RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT; + else + sgl_type = RTE_CRYPTODEV_FF_IN_PLACE_SGL; + } - if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { - if (!(feat_flags & oop_flag)) { - printf("Device doesn't support out-of-place " - "scatter-gather in input mbuf. " - "Test Skipped.\n"); - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "SKIPPED"); - return TEST_SKIPPED; - } - } else { - if (!(feat_flags & RTE_CRYPTODEV_FF_IN_PLACE_SGL)) { - printf("Device doesn't support in-place " - "scatter-gather mbufs. " - "Test Skipped.\n"); - snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, - "SKIPPED"); - return TEST_SKIPPED; - } + if (!(feat_flags & sgl_type)) { + printf("Device doesn't support scatter-gather type." + " Test Skipped.\n"); + snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, + "SKIPPED"); + return TEST_SKIPPED; } - nb_segs = 3; + if (sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT || + sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT || + sgl_type == RTE_CRYPTODEV_FF_IN_PLACE_SGL) + nb_segs_in = t->sgl_segs == 0 ? 3 : t->sgl_segs; + + if (sgl_type == RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT || + sgl_type == RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT) + nb_segs_out = t->sgl_segs == 0 ? 
3 : t->sgl_segs; } + if (!!(feat_flags & RTE_CRYPTODEV_FF_CIPHER_WRAPPED_KEY) ^ tdata->wrapped_key) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, @@ -207,7 +210,7 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, /* for contiguous mbuf, nb_segs is 1 */ ibuf = create_segmented_mbuf(mbuf_pool, - tdata->ciphertext.len, nb_segs, src_pattern); + tdata->ciphertext.len, nb_segs_in, src_pattern); if (ibuf == NULL) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u FAILED: %s", @@ -256,7 +259,8 @@ test_blockcipher_one_case(const struct blockcipher_test_case *t, } if (t->feature_mask & BLOCKCIPHER_TEST_FEATURE_OOP) { - obuf = rte_pktmbuf_alloc(mbuf_pool); + obuf = create_segmented_mbuf(mbuf_pool, + tdata->ciphertext.len, nb_segs_out, dst_pattern); if (!obuf) { snprintf(test_msg, BLOCKCIPHER_TEST_MSG_LEN, "line %u " "FAILED: %s", __LINE__, diff --git a/app/test/test_cryptodev_blockcipher.h b/app/test/test_cryptodev_blockcipher.h index 84f5d57787..bad93a5ec1 100644 --- a/app/test/test_cryptodev_blockcipher.h +++ b/app/test/test_cryptodev_blockcipher.h @@ -57,6 +57,8 @@ struct blockcipher_test_case { const struct blockcipher_test_data *test_data; uint8_t op_mask; /* operation mask */ uint8_t feature_mask; + uint64_t sgl_flag; + uint8_t sgl_segs; }; struct blockcipher_test_data { diff --git a/app/test/test_cryptodev_hash_test_vectors.h b/app/test/test_cryptodev_hash_test_vectors.h index 5bd7858de4..62602310b2 100644 --- a/app/test/test_cryptodev_hash_test_vectors.h +++ b/app/test/test_cryptodev_hash_test_vectors.h @@ -467,10 +467,12 @@ static const struct blockcipher_test_case hash_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN, }, { - .test_descr = "HMAC-SHA1 Digest Scatter Gather", + .test_descr = "HMAC-SHA1 Digest Scatter Gather (Inplace)", .test_data = &hmac_sha1_test_vector, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_GEN, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = 
"HMAC-SHA1 Digest Verify", @@ -478,10 +480,12 @@ static const struct blockcipher_test_case hash_test_cases[] = { .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY, }, { - .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather", + .test_descr = "HMAC-SHA1 Digest Verify Scatter Gather (Inplace)", .test_data = &hmac_sha1_test_vector, .op_mask = BLOCKCIPHER_TEST_OP_AUTH_VERIFY, .feature_mask = BLOCKCIPHER_TEST_FEATURE_SG, + .sgl_flag = RTE_CRYPTODEV_FF_IN_PLACE_SGL, + .sgl_segs = 3 }, { .test_descr = "SHA224 Digest",