From patchwork Tue May 16 15:24:20 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 126898
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: dev@dpdk.org
Cc: kai.ji@intel.com, gakhil@marvell.com, Pablo de Lara, Ciara Power
Subject: [PATCH v2 6/8] crypto/ipsec_mb: optimize for GCM case
Date: Tue, 16 May 2023 15:24:20 +0000
Message-Id: <20230516152422.606617-7-ciara.power@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230516152422.606617-1-ciara.power@intel.com>
References: <20230421131221.1732314-1-ciara.power@intel.com>
 <20230516152422.606617-1-ciara.power@intel.com>
MIME-Version: 1.0

From: Pablo de Lara

Use a separate code path when dealing with AES-GCM.

Signed-off-by: Pablo de Lara
Signed-off-by: Ciara Power
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 88 +++++++++++++++++++++++---
 1 file changed, 79 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 80f59e75de..58faf3502c 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -1366,6 +1366,70 @@ multi_sgl_job(IMB_JOB *job, struct rte_crypto_op *op,
 	}
 	return 0;
 }
+
+static inline int
+set_gcm_job(IMB_MGR *mb_mgr, IMB_JOB *job, const uint8_t sgl,
+		struct aesni_mb_qp_data *qp_data,
+		struct rte_crypto_op *op, uint8_t *digest_idx,
+		const struct aesni_mb_session *session,
+		struct rte_mbuf *m_src, struct rte_mbuf *m_dst,
+		const int oop)
+{
+	const uint32_t m_offset = op->sym->aead.data.offset;
+
+	job->u.GCM.aad = op->sym->aead.aad.data;
+	if (sgl) {
+		job->u.GCM.ctx = &qp_data->gcm_sgl_ctx;
+		job->cipher_mode = IMB_CIPHER_GCM_SGL;
+		job->hash_alg = IMB_AUTH_GCM_SGL;
+		job->hash_start_src_offset_in_bytes = 0;
+		job->msg_len_to_hash_in_bytes = 0;
+		job->msg_len_to_cipher_in_bytes = 0;
+		job->cipher_start_src_offset_in_bytes = 0;
+	} else {
+		job->hash_start_src_offset_in_bytes =
+			op->sym->aead.data.offset;
+		job->msg_len_to_hash_in_bytes =
+			op->sym->aead.data.length;
+		job->cipher_start_src_offset_in_bytes =
+			op->sym->aead.data.offset;
+		job->msg_len_to_cipher_in_bytes = op->sym->aead.data.length;
+	}
+
+	if (session->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+		job->auth_tag_output = qp_data->temp_digests[*digest_idx];
+		*digest_idx = (*digest_idx + 1) % IMB_MAX_JOBS;
+	} else {
+		job->auth_tag_output = op->sym->aead.digest.data;
+	}
+
+	job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+			session->iv.offset);
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = op;
+
+	if (sgl) {
+		job->src = NULL;
+		job->dst = NULL;
+
+#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
+		if (m_src->nb_segs <= MAX_NUM_SEGS)
+			return single_sgl_job(job, op, oop,
+					m_offset, m_src, m_dst,
+					qp_data->sgl_segs);
+		else
+#endif
+			return multi_sgl_job(job, op, oop,
+					m_offset, m_src, m_dst, mb_mgr);
+	} else {
+		job->src = rte_pktmbuf_mtod(m_src, uint8_t *);
+		job->dst = rte_pktmbuf_mtod_offset(m_dst, uint8_t *, m_offset);
+	}
+
+	return 0;
+}
+
 /**
  * Process a crypto operation and complete a IMB_JOB job structure for
  * submission to the multi buffer library for processing.
@@ -1403,10 +1467,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		return -1;
 	}
 
-	memcpy(job, &session->template_job, sizeof(IMB_JOB));
+	const IMB_CIPHER_MODE cipher_mode =
+			session->template_job.cipher_mode;
 
-	/* Set authentication parameters */
-	const int aead = is_aead_algo(job->hash_alg, job->cipher_mode);
+	memcpy(job, &session->template_job, sizeof(IMB_JOB));
 
 	if (!op->sym->m_dst) {
 		/* in-place operation */
@@ -1424,10 +1488,17 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 
 	if (m_src->nb_segs > 1 || m_dst->nb_segs > 1) {
 		sgl = 1;
-		if (!imb_lib_support_sgl_algo(job->cipher_mode))
+		if (!imb_lib_support_sgl_algo(cipher_mode))
 			lb_sgl = 1;
 	}
 
+	if (cipher_mode == IMB_CIPHER_GCM)
+		return set_gcm_job(mb_mgr, job, sgl, qp_data,
+			op, digest_idx, session, m_src, m_dst, oop);
+
+	/* Set authentication parameters */
+	const int aead = is_aead_algo(job->hash_alg, cipher_mode);
+
 	switch (job->hash_alg) {
 	case IMB_AUTH_AES_CCM:
 		job->u.CCM.aad = op->sym->aead.aad.data + 18;
@@ -1474,13 +1545,12 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	else
 		m_offset = op->sym->cipher.data.offset;
 
-	if (job->cipher_mode == IMB_CIPHER_ZUC_EEA3) {
+	if (cipher_mode == IMB_CIPHER_ZUC_EEA3)
 		m_offset >>= 3;
-	} else if (job->cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN) {
+	else if (cipher_mode == IMB_CIPHER_SNOW3G_UEA2_BITLEN)
 		m_offset = 0;
-	} else if (job->cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN) {
+	else if (cipher_mode == IMB_CIPHER_KASUMI_UEA1_BITLEN)
 		m_offset = 0;
-	}
 
 	/* Set digest output location */
 	if (job->hash_alg != IMB_AUTH_NULL &&
@@ -1642,7 +1712,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		job->msg_len_to_cipher_in_bytes = op->sym->cipher.data.length;
 	}
 
-	if (job->cipher_mode == IMB_CIPHER_NULL && oop) {
+	if (cipher_mode == IMB_CIPHER_NULL && oop) {
 		memcpy(job->dst + job->cipher_start_src_offset_in_bytes,
 			job->src + job->cipher_start_src_offset_in_bytes,
 			job->msg_len_to_cipher_in_bytes);