From patchwork Fri Apr 21 13:12:14 2023
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 126390
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: dev@dpdk.org
Cc: kai.ji@intel.com, Marcel Cornu, Pablo de Lara, Ciara Power
Subject: [PATCH 2/8] crypto/ipsec_mb: use burst API in aesni_mb
Date: Fri, 21 Apr 2023 13:12:14 +0000
Message-Id: <20230421131221.1732314-3-ciara.power@intel.com>
In-Reply-To: <20230421131221.1732314-1-ciara.power@intel.com>
References: <20230421131221.1732314-1-ciara.power@intel.com>

From: Marcel Cornu

Use the new ipsec_mb burst API in the dequeue burst function when the
ipsec_mb version is v1.3 or newer.
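
For context, the v1.3+ burst API adopted here follows roughly the
pattern below. This is a minimal sketch against an already initialized
IMB_MGR, not the driver code itself; job setup and error handling are
elided, and burst_flow_sketch/n_jobs/n_done are illustrative names:

    #include <intel-ipsec-mb.h>

    /* Sketch: reserve a burst of job slots, fill them, submit them
     * in one call, and flush if nothing has completed yet.
     * n is assumed to be <= IMB_MAX_BURST_SIZE. */
    static uint32_t
    burst_flow_sketch(IMB_MGR *mb_mgr, uint32_t n)
    {
            IMB_JOB *jobs[IMB_MAX_BURST_SIZE];
            uint32_t n_jobs, n_done;

            /* Reserve up to n job slots; may return fewer when the
             * manager's internal queue is busy. */
            n_jobs = IMB_GET_NEXT_BURST(mb_mgr, n, jobs);

            /* ... fill each jobs[i] with cipher/auth parameters ... */

            /* Submit all filled jobs at once; on return, jobs[] holds
             * pointers to any jobs completed during submission. */
            n_done = IMB_SUBMIT_BURST(mb_mgr, n_jobs, jobs);

            /* Nothing back yet: force in-flight jobs to complete. */
            if (n_done == 0)
                    n_done = IMB_FLUSH_BURST(mb_mgr, n_jobs, jobs);

            return n_done;
    }

This replaces the one-job-at-a-time get/submit/flush cycle used with
older ipsec_mb versions, amortizing the per-call overhead across the
burst.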
Signed-off-by: Marcel Cornu
Signed-off-by: Pablo de Lara
Signed-off-by: Ciara Power
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 133 ++++++++++++++++++++++++-
 1 file changed, 132 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index c53548aa3b..5789b82d8e 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -9,6 +9,10 @@ struct aesni_mb_op_buf_data {
 	uint32_t offset;
 };
 
+#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
+static IMB_JOB *jobs[IMB_MAX_BURST_SIZE] = {NULL};
+#endif
+
 /**
  * Calculate the authentication pre-computes
  *
@@ -1974,6 +1978,133 @@ set_job_null_op(IMB_JOB *job, struct rte_crypto_op *op)
 	return job;
 }
 
+#if IMB_VERSION(1, 2, 0) < IMB_VERSION_NUM
+static uint16_t
+aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	struct ipsec_mb_qp *qp = queue_pair;
+	IMB_MGR *mb_mgr = qp->mb_mgr;
+	struct rte_crypto_op *op;
+	struct rte_crypto_op *deqd_ops[IMB_MAX_BURST_SIZE];
+	IMB_JOB *job;
+	int retval, processed_jobs = 0;
+	uint16_t i, nb_jobs;
+
+	if (unlikely(nb_ops == 0 || mb_mgr == NULL))
+		return 0;
+
+	uint8_t digest_idx = qp->digest_idx;
+	uint16_t burst_sz = (nb_ops > IMB_MAX_BURST_SIZE) ?
+			IMB_MAX_BURST_SIZE : nb_ops;
+
+	/*
+	 * If nb_ops is greater than the max supported
+	 * ipsec_mb burst size, then process in bursts of
+	 * IMB_MAX_BURST_SIZE until all operations are submitted
+	 */
+	while (nb_ops) {
+		uint16_t nb_submit_ops;
+		uint16_t n = (nb_ops / burst_sz) ?
+				burst_sz : nb_ops;
+
+		while (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
+			/*
+			 * Not enough free jobs in the queue
+			 * Flush n jobs until enough jobs available
+			 */
+			nb_jobs = IMB_FLUSH_BURST(mb_mgr, n, jobs);
+			for (i = 0; i < nb_jobs; i++) {
+				job = jobs[i];
+
+				op = post_process_mb_job(qp, job);
+				if (op) {
+					ops[processed_jobs++] = op;
+					qp->stats.dequeued_count++;
+				} else {
+					qp->stats.dequeue_err_count++;
+					break;
+				}
+			}
+		}
+
+		/*
+		 * Get the next operations to process from ingress queue.
+		 * There is no need to return the job to the IMB_MGR
+		 * if there are no more operations to process, since
+		 * the IMB_MGR can use that pointer again in next
+		 * get_next calls.
+		 */
+		nb_submit_ops = rte_ring_dequeue_burst(qp->ingress_queue,
+				(void **)deqd_ops, n, NULL);
+		for (i = 0; i < nb_submit_ops; i++) {
+			job = jobs[i];
+			op = deqd_ops[i];
+
+#ifdef AESNI_MB_DOCSIS_SEC_ENABLED
+			if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+				retval = set_sec_mb_job_params(job, qp, op,
+						&digest_idx);
+			else
+#endif
+				retval = set_mb_job_params(job, qp, op,
+						&digest_idx, mb_mgr);
+
+			if (unlikely(retval != 0)) {
+				qp->stats.dequeue_err_count++;
+				set_job_null_op(job, op);
+			}
+		}
+
+		/* Submit jobs to multi-buffer for processing */
+#ifdef RTE_LIBRTE_PMD_AESNI_MB_DEBUG
+		int err = 0;
+
+		nb_jobs = IMB_SUBMIT_BURST(mb_mgr, nb_submit_ops, jobs);
+		err = imb_get_errno(mb_mgr);
+		if (err)
+			IPSEC_MB_LOG(ERR, "%s", imb_get_strerror(err));
+#else
+		nb_jobs = IMB_SUBMIT_BURST_NOCHECK(mb_mgr,
+				nb_submit_ops, jobs);
+#endif
+		for (i = 0; i < nb_jobs; i++) {
+			job = jobs[i];
+
+			op = post_process_mb_job(qp, job);
+			if (op) {
+				ops[processed_jobs++] = op;
+				qp->stats.dequeued_count++;
+			} else {
+				qp->stats.dequeue_err_count++;
+				break;
+			}
+		}
+
+		qp->digest_idx = digest_idx;
+
+		if (processed_jobs < 1) {
+			nb_jobs = IMB_FLUSH_BURST(mb_mgr, n, jobs);
+
+			for (i = 0; i < nb_jobs; i++) {
+				job = jobs[i];
+
+				op = post_process_mb_job(qp, job);
+				if (op) {
+					ops[processed_jobs++] = op;
+					qp->stats.dequeued_count++;
+				} else {
+					qp->stats.dequeue_err_count++;
+					break;
+				}
+			}
+		}
+		nb_ops -= n;
+	}
+
+	return processed_jobs;
+}
+#else
 static uint16_t
 aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -2054,7 +2185,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 
 	return processed_jobs;
 }
-
+#endif
 static inline int
 check_crypto_sgl(union rte_crypto_sym_ofs so, const struct rte_crypto_sgl *sgl)
 {
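
For reference, this PMD function is reached through the standard
cryptodev dequeue API rather than being called directly. A minimal
caller-side sketch (drain_queue_pair is an illustrative helper; dev_id
and qp_id are placeholders for a configured device and queue pair):

    #include <rte_crypto.h>
    #include <rte_cryptodev.h>

    /* Dequeue up to nb_ops completed crypto operations from one
     * queue pair; the aesni_mb PMD services this via the burst
     * dequeue path added above. */
    static uint16_t
    drain_queue_pair(uint8_t dev_id, uint16_t qp_id,
                    struct rte_crypto_op **ops, uint16_t nb_ops)
    {
            return rte_cryptodev_dequeue_burst(dev_id, qp_id,
                            ops, nb_ops);
    }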