From patchwork Thu Sep  7 16:12:56 2023

X-Patchwork-Submitter: Brian Dooley
X-Patchwork-Id: 131234
X-Patchwork-Delegate: gakhil@marvell.com
From: Brian Dooley
To: Kai Ji, Pablo de Lara
Cc: dev@dpdk.org, gakhil@marvell.com, Brian Dooley, Ciara Power
Subject: [PATCH v7 1/3] crypto/ipsec_mb: add digest encrypted feature
Date: Thu, 7 Sep 2023 16:12:56 +0000
Message-Id: <20230907161258.2288031-2-brian.dooley@intel.com>
In-Reply-To: <20230907161258.2288031-1-brian.dooley@intel.com>
References: <20230907102614.2269913-2-brian.dooley@intel.com>
 <20230907161258.2288031-1-brian.dooley@intel.com>
List-Id: DPDK patches and discussions

The AESNI_MB PMD does not support the Digest Encrypted feature. This patch
adds a check and support for this feature.

Acked-by: Ciara Power
Signed-off-by: Brian Dooley
---
v2: Fixed CHECKPATCH warning
v3: Add Digest encrypted support to docs
v4: Add comments and small refactor
v5: Fix checkpatch warnings
v6: Add skipping tests for synchronous crypto
v7: Separate synchronous fix into separate commit
---
 doc/guides/cryptodevs/features/aesni_mb.ini |   1 +
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c      | 109 +++++++++++++++++++-
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/features/aesni_mb.ini b/doc/guides/cryptodevs/features/aesni_mb.ini
index e4e965c35a..8df5fa2c85 100644
--- a/doc/guides/cryptodevs/features/aesni_mb.ini
+++ b/doc/guides/cryptodevs/features/aesni_mb.ini
@@ -20,6 +20,7 @@ OOP LB In LB Out = Y
 CPU crypto             = Y
 Symmetric sessionless  = Y
 Non-Byte aligned data  = Y
+Digest encrypted       = Y
 ;
 ; Supported crypto algorithms of the 'aesni_mb' crypto driver.
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..7f61065939 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1438,6 +1438,54 @@ set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
 	return 0;
 }
 
+/** Check if conditions are met for digest-appended operations */
+static uint8_t *
+aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
+		uint32_t oop)
+{
+	unsigned int auth_size, cipher_size;
+	uint8_t *end_cipher;
+	uint8_t *start_cipher;
+
+	if (job->cipher_mode == IMB_CIPHER_NULL)
+		return NULL;
+
+	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3 ||
+			job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+			job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+		cipher_size = (op->sym->cipher.data.offset >> 3) +
+				(op->sym->cipher.data.length >> 3);
+	} else {
+		cipher_size = (op->sym->cipher.data.offset) +
+				(op->sym->cipher.data.length);
+	}
+	if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+			job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+			job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+			job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+		auth_size = (op->sym->auth.data.offset >> 3) +
+				(op->sym->auth.data.length >> 3);
+	} else {
+		auth_size = (op->sym->auth.data.offset) +
+				(op->sym->auth.data.length);
+	}
+
+	if (!oop) {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
+	} else {
+		end_cipher = rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *, cipher_size);
+		start_cipher = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
+	}
+
+	if (start_cipher < op->sym->auth.digest.data &&
+			op->sym->auth.digest.data < end_cipher) {
+		return rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *, auth_size);
+	} else {
+		return NULL;
+	}
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1580,9 +1628,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	} else {
 		if (aead)
 			job->auth_tag_output = op->sym->aead.digest.data;
-		else
-			job->auth_tag_output = op->sym->auth.digest.data;
-
+		else {
+			job->auth_tag_output = aesni_mb_digest_appended_in_src(op, job, oop);
+			if (job->auth_tag_output == NULL) {
+				job->auth_tag_output = op->sym->auth.digest.data;
+			}
+		}
 		if (session->auth.req_digest_len !=
 				job->auth_tag_output_len_in_bytes) {
 			job->auth_tag_output =
@@ -1917,6 +1968,7 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 	struct aesni_mb_session *sess = NULL;
 	uint8_t *linear_buf = NULL;
 	int sgl = 0;
+	uint8_t oop = 0;
 	uint8_t is_docsis_sec = 0;
 
 	if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
@@ -1962,8 +2014,54 @@ post_process_mb_job(struct ipsec_mb_qp *qp, IMB_JOB *job)
 					op->sym->auth.digest.data,
 					sess->auth.req_digest_len,
 					&op->status);
-		} else
+		} else {
+			if (!op->sym->m_dst || op->sym->m_dst == op->sym->m_src) {
+				/* in-place operation */
+				oop = 0;
+			} else { /* out-of-place operation */
+				oop = 1;
+			}
+
+			/* Enable digest check */
+			if (op->sym->m_src->nb_segs == 1 && op->sym->m_dst != NULL
+					&& !is_aead_algo(job->hash_alg, sess->template_job.cipher_mode) &&
+					aesni_mb_digest_appended_in_src(op, job, oop) != NULL) {
+				unsigned int auth_size, cipher_size;
+				int unencrypted_bytes = 0;
+				if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN ||
+						job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+					cipher_size = (op->sym->cipher.data.offset >> 3) +
+							(op->sym->cipher.data.length >> 3);
+				} else {
+					cipher_size = (op->sym->cipher.data.offset) +
+							(op->sym->cipher.data.length);
+				}
+				if (job->hash_alg == IMB_AUTH_ZUC_EIA3_BITLEN ||
+						job->hash_alg == IMB_AUTH_SNOW3G_UIA2_BITLEN ||
+						job->hash_alg == IMB_AUTH_KASUMI_UIA1 ||
+						job->hash_alg == IMB_AUTH_ZUC256_EIA3_BITLEN) {
+					auth_size = (op->sym->auth.data.offset >> 3) +
+							(op->sym->auth.data.length >> 3);
+				} else {
+					auth_size = (op->sym->auth.data.offset) +
+							(op->sym->auth.data.length);
+				}
+				/* Check for unencrypted bytes in partial digest cases */
+				if (job->cipher_mode != IMB_CIPHER_NULL) {
+					unencrypted_bytes = auth_size +
+						job->auth_tag_output_len_in_bytes - cipher_size;
+				}
+				if (unencrypted_bytes > 0)
+					rte_memcpy(
+						rte_pktmbuf_mtod_offset(op->sym->m_dst, uint8_t *,
+							cipher_size),
+						rte_pktmbuf_mtod_offset(op->sym->m_src, uint8_t *,
+							cipher_size),
+						unencrypted_bytes);
+			}
 			generate_digest(job, op, sess);
+		}
 		break;
 	default:
 		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
@@ -2555,7 +2653,8 @@ RTE_INIT(ipsec_mb_register_aesni_mb)
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_SECURITY;
+			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
 
 	aesni_mb_data->internals_priv_size = 0;
 	aesni_mb_data->ops = &aesni_mb_pmd_ops;

From patchwork Thu Sep  7 16:12:57 2023

X-Patchwork-Submitter: Brian Dooley
X-Patchwork-Id: 131235
X-Patchwork-Delegate: gakhil@marvell.com
From: Brian Dooley
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, Brian Dooley, adamx.dybkowski@intel.com, Ciara Power
Subject: [PATCH v7 2/3] test/crypto: fix IV in some vectors
Date: Thu, 7 Sep 2023 16:12:57 +0000
Message-Id: <20230907161258.2288031-3-brian.dooley@intel.com>
In-Reply-To: <20230907161258.2288031-1-brian.dooley@intel.com>
References: <20230907102614.2269913-2-brian.dooley@intel.com>
 <20230907161258.2288031-1-brian.dooley@intel.com>

SNOW3G and ZUC algorithms require non-zero length IVs.
Fixes: c6c267a00a92 ("test/crypto: add mixed encypted-digest")
Cc: adamx.dybkowski@intel.com

Acked-by: Ciara Power
Signed-off-by: Brian Dooley
---
 app/test/test_cryptodev_mixed_test_vectors.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/app/test/test_cryptodev_mixed_test_vectors.h b/app/test/test_cryptodev_mixed_test_vectors.h
index 161e2d905f..9c4313185e 100644
--- a/app/test/test_cryptodev_mixed_test_vectors.h
+++ b/app/test/test_cryptodev_mixed_test_vectors.h
@@ -478,8 +478,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_snow_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,
@@ -917,8 +919,10 @@ struct mixed_cipher_auth_test_data auth_aes_cmac_cipher_zuc_test_case_1 = {
 	},
 	.cipher_iv = {
 		.data = {
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+			0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 		},
-		.len = 0,
+		.len = 16,
 	},
 	.cipher = {
 		.len_bits = 516 << 3,

From patchwork Thu Sep  7 16:12:58 2023

X-Patchwork-Submitter: Brian Dooley
X-Patchwork-Id: 131236
X-Patchwork-Delegate: gakhil@marvell.com
From: Brian Dooley
To: Akhil Goyal, Fan Zhang
Cc: dev@dpdk.org, Brian Dooley, pablo.de.lara.guarch@intel.com, Ciara Power
Subject: [PATCH v7 3/3] test/crypto: fix failing synchronous tests
Date: Thu, 7 Sep 2023 16:12:58 +0000
Message-Id: <20230907161258.2288031-4-brian.dooley@intel.com>
In-Reply-To: <20230907161258.2288031-1-brian.dooley@intel.com>
References: <20230907102614.2269913-2-brian.dooley@intel.com>
 <20230907161258.2288031-1-brian.dooley@intel.com>

Some synchronous tests do not support digest encrypted and need to be
skipped. This commit adds extra skips for these tests.
Fixes: 55ab4a8c4fb5 ("test/crypto: disable wireless cases for CPU crypto API")
Cc: pablo.de.lara.guarch@intel.com

Acked-by: Ciara Power
Signed-off-by: Brian Dooley
---
 app/test/test_cryptodev.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 956268bfcd..70f6b7ece1 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -6394,6 +6394,9 @@ test_zuc_auth_cipher(const struct wireless_test_data *tdata,
 			tdata->digest.len) < 0)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;
@@ -7829,6 +7832,9 @@ test_mixed_auth_cipher(const struct mixed_cipher_auth_test_data *tdata,
 	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+		return TEST_SKIPPED;
+
 	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
 
 	uint64_t feat_flags = dev_info.feature_flags;