From patchwork Tue May 16 15:24:16 2023
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 126894
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: dev@dpdk.org
Cc: kai.ji@intel.com, gakhil@marvell.com, Marcel Cornu, Pablo de Lara, Ciara Power
Subject: [PATCH v2 2/8] crypto/ipsec_mb: use burst API in aesni_mb
Date: Tue, 16 May 2023 15:24:16 +0000
Message-Id: <20230516152422.606617-3-ciara.power@intel.com>
In-Reply-To: <20230516152422.606617-1-ciara.power@intel.com>
References: <20230421131221.1732314-1-ciara.power@intel.com> <20230516152422.606617-1-ciara.power@intel.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>

From: Marcel Cornu

Use the new ipsec_mb burst API in the dequeue burst function when the
ipsec_mb library version is v1.3 or newer.

Signed-off-by: Marcel Cornu
Signed-off-by: Pablo de Lara
Signed-off-by: Ciara Power

---
v2: moved some functions inside ifdef as they are only used when
    IPSec_MB version is 1.2 or lower.
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 202 ++++++++++++++++++++-----
 1 file changed, 167 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index c53548aa3b..b22c0183eb 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -9,6 +9,10 @@ struct aesni_mb_op_buf_data {
 	uint32_t offset;
 };
 
+#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
+static IMB_JOB *jobs[IMB_MAX_BURST_SIZE] = {NULL};
+#endif
+
 /**
  * Calculate the authentication pre-computes
  *
@@ -1884,6 +1888,168 @@ post_process_mb_sync_job(IMB_JOB *job)
 	st[0] = (job->status == IMB_STATUS_COMPLETED) ? 0 : EBADMSG;
 }
 
+static inline uint32_t
+handle_completed_sync_jobs(IMB_JOB *job, IMB_MGR *mb_mgr)
+{
+	uint32_t i;
+
+	for (i = 0; job != NULL; i++, job = IMB_GET_COMPLETED_JOB(mb_mgr))
+		post_process_mb_sync_job(job);
+
+	return i;
+}
+
+static inline uint32_t
+flush_mb_sync_mgr(IMB_MGR *mb_mgr)
+{
+	IMB_JOB *job;
+
+	job = IMB_FLUSH_JOB(mb_mgr);
+	return handle_completed_sync_jobs(job, mb_mgr);
+}
+
+static inline IMB_JOB *
+set_job_null_op(IMB_JOB *job, struct rte_crypto_op *op)
+{
+	job->chain_order = IMB_ORDER_HASH_CIPHER;
+	job->cipher_mode = IMB_CIPHER_NULL;
+	job->hash_alg = IMB_AUTH_NULL;
+	job->cipher_direction = IMB_DIR_DECRYPT;
+
+	/* Set user data to be crypto operation data struct */
+	job->user_data = op;
+
+	return job;
+}
+
+#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
+static uint16_t
+aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct ipsec_mb_qp *qp = queue_pair;
+	IMB_MGR *mb_mgr = qp->mb_mgr;
+	struct rte_crypto_op *op;
+	struct rte_crypto_op *deqd_ops[IMB_MAX_BURST_SIZE];
+	IMB_JOB *job;
+	int retval, processed_jobs = 0;
+	uint16_t i, nb_jobs;
+
+	if (unlikely(nb_ops == 0 || mb_mgr == NULL))
+		return 0;
+
+	uint8_t digest_idx = qp->digest_idx;
+	uint16_t burst_sz = (nb_ops > IMB_MAX_BURST_SIZE) ?
+		IMB_MAX_BURST_SIZE : nb_ops;
+
+	/*
+	 * If nb_ops is greater than the max supported
+	 * ipsec_mb burst size, then process in bursts of
+	 * IMB_MAX_BURST_SIZE until all operations are submitted
+	 */
+	while (nb_ops) {
+		uint16_t nb_submit_ops;
+		uint16_t n = (nb_ops / burst_sz) ?
+			burst_sz : nb_ops;
+
+		while (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
+			/*
+			 * Not enough free jobs in the queue
+			 * Flush n jobs until enough jobs available
+			 */
+			nb_jobs = IMB_FLUSH_BURST(mb_mgr, n, jobs);
+			for (i = 0; i < nb_jobs; i++) {
+				job = jobs[i];
+
+				op = post_process_mb_job(qp, job);
+				if (op) {
+					ops[processed_jobs++] = op;
+					qp->stats.dequeued_count++;
+				} else {
+					qp->stats.dequeue_err_count++;
+					break;
+				}
+			}
+		}
+
+		/*
+		 * Get the next operations to process from ingress queue.
+		 * There is no need to return the job to the IMB_MGR
+		 * if there are no more operations to process, since
+		 * the IMB_MGR can use that pointer again in next
+		 * get_next calls.
+		 */
+		nb_submit_ops = rte_ring_dequeue_burst(qp->ingress_queue,
+				(void **)deqd_ops, n, NULL);
+		for (i = 0; i < nb_submit_ops; i++) {
+			job = jobs[i];
+			op = deqd_ops[i];
+
+#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
+			if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+				retval = set_sec_mb_job_params(job, qp, op,
+						&digest_idx);
+			else
+#endif
+				retval = set_mb_job_params(job, qp, op,
+						&digest_idx, mb_mgr);
+
+			if (unlikely(retval != 0)) {
+				qp->stats.dequeue_err_count++;
+				set_job_null_op(job, op);
+			}
+		}
+
+		/* Submit jobs to multi-buffer for processing */
+#ifdef RTE_LIBRTE_PMD_AESNI_MB_DEBUG
+		int err = 0;
+
+		nb_jobs = IMB_SUBMIT_BURST(mb_mgr, nb_submit_ops, jobs);
+		err = imb_get_errno(mb_mgr);
+		if (err)
+			IPSEC_MB_LOG(ERR, "%s", imb_get_strerror(err));
+#else
+		nb_jobs = IMB_SUBMIT_BURST_NOCHECK(mb_mgr,
+				nb_submit_ops, jobs);
+#endif
+		for (i = 0; i < nb_jobs; i++) {
+			job = jobs[i];
+
+			op = post_process_mb_job(qp, job);
+			if (op) {
+				ops[processed_jobs++] = op;
+				qp->stats.dequeued_count++;
+			} else {
+				qp->stats.dequeue_err_count++;
+				break;
+			}
+		}
+
+		qp->digest_idx = digest_idx;
+
+		if (processed_jobs < 1) {
+			nb_jobs = IMB_FLUSH_BURST(mb_mgr, n, jobs);
+
+			for (i = 0; i < nb_jobs; i++) {
+				job = jobs[i];
+
+				op = post_process_mb_job(qp, job);
+				if (op) {
+					ops[processed_jobs++] = op;
+					qp->stats.dequeued_count++;
+				} else {
+					qp->stats.dequeue_err_count++;
+					break;
+				}
+			}
+		}
+		nb_ops -= n;
+	}
+
+	return processed_jobs;
+}
+#else
+
 /**
  * Process a completed IMB_JOB job and keep processing jobs until
  * get_completed_job return NULL
@@ -1924,26 +2090,6 @@ handle_completed_jobs(struct ipsec_mb_qp *qp, IMB_MGR *mb_mgr,
 	return processed_jobs;
 }
 
-static inline uint32_t
-handle_completed_sync_jobs(IMB_JOB *job, IMB_MGR *mb_mgr)
-{
-	uint32_t i;
-
-	for (i = 0; job != NULL; i++, job = IMB_GET_COMPLETED_JOB(mb_mgr))
-		post_process_mb_sync_job(job);
-
-	return i;
-}
-
-static inline uint32_t
-flush_mb_sync_mgr(IMB_MGR *mb_mgr)
-{
-	IMB_JOB *job;
-
-	job = IMB_FLUSH_JOB(mb_mgr);
-	return handle_completed_sync_jobs(job, mb_mgr);
-}
-
 static inline uint16_t
 flush_mb_mgr(struct ipsec_mb_qp *qp, IMB_MGR *mb_mgr,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
@@ -1960,20 +2106,6 @@ flush_mb_mgr(struct ipsec_mb_qp *qp, IMB_MGR *mb_mgr,
 	return processed_ops;
 }
 
-static inline IMB_JOB *
-set_job_null_op(IMB_JOB *job, struct rte_crypto_op *op)
-{
-	job->chain_order = IMB_ORDER_HASH_CIPHER;
-	job->cipher_mode = IMB_CIPHER_NULL;
-	job->hash_alg = IMB_AUTH_NULL;
-	job->cipher_direction = IMB_DIR_DECRYPT;
-
-	/* Set user data to be crypto operation data struct */
-	job->user_data = op;
-
-	return job;
-}
-
 static uint16_t
 aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -2054,7 +2186,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	return processed_jobs;
 }
-
+#endif
 static inline int
 check_crypto_sgl(union rte_crypto_sym_ofs so, const struct rte_crypto_sgl *sgl)
 {
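
---
For readers new to the intel-ipsec-mb burst interface, the standalone
sketch below condenses the pattern this patch adopts: reserve job slots
with IMB_GET_NEXT_BURST, flush with IMB_FLUSH_BURST when the manager has
no free slots, fill each job, then submit the whole batch with
IMB_SUBMIT_BURST. It is a minimal illustration under stated assumptions,
not the PMD code: burst_sketch() and fill_job() are hypothetical names,
fill_job() mirrors the patch's set_job_null_op() placeholder, and all
DPDK queue-pair bookkeeping is omitted. It assumes intel-ipsec-mb v1.3
or newer, i.e. the same IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM gate used
above.

/*
 * Minimal sketch of the burst submit/flush flow (assumes
 * intel-ipsec-mb >= 1.3). burst_sketch() and fill_job() are
 * hypothetical illustration names, not part of the PMD or library.
 */
#include <stdint.h>
#include <intel-ipsec-mb.h>

static void
fill_job(IMB_JOB *job)
{
	/* NULL cipher/auth placeholder, mirroring set_job_null_op() */
	job->chain_order = IMB_ORDER_HASH_CIPHER;
	job->cipher_mode = IMB_CIPHER_NULL;
	job->hash_alg = IMB_AUTH_NULL;
	job->cipher_direction = IMB_DIR_DECRYPT;
}

static uint32_t
burst_sketch(IMB_MGR *mb_mgr, uint32_t n)
{
	IMB_JOB *jobs[IMB_MAX_BURST_SIZE];
	uint32_t completed = 0, i;

	if (n > IMB_MAX_BURST_SIZE)
		n = IMB_MAX_BURST_SIZE;

	/*
	 * Reserve n job slots; if the manager has none free, flush
	 * in-flight work (a real PMD would post-process each flushed
	 * job here, as the patch does with post_process_mb_job()).
	 */
	while (IMB_GET_NEXT_BURST(mb_mgr, n, jobs) < n)
		completed += IMB_FLUSH_BURST(mb_mgr, n, jobs);

	for (i = 0; i < n; i++)
		fill_job(jobs[i]);

	/* Submit the whole batch; returns the number completed so far */
	completed += IMB_SUBMIT_BURST(mb_mgr, n, jobs);

	return completed;
}

Compared with the pre-1.3 one-job-at-a-time IMB_SUBMIT_JOB/IMB_FLUSH_JOB
loop kept in the #else branch, batching amortizes the per-call overhead
across up to IMB_MAX_BURST_SIZE jobs, which is why the patch gates the
new dequeue path on the library version.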