From patchwork Thu Nov 23 17:15:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Power, Ciara"
X-Patchwork-Id: 134590
X-Patchwork-Delegate: gakhil@marvell.com
From: Ciara Power
To: dev@dpdk.org
Cc: thomas@monjalon.net, kai.ji@intel.com, pablo.de.lara.guarch@intel.com,
 Ciara Power, stable@dpdk.org
Subject: [PATCH v2] crypto/ipsec_mb: fix getting process ID per job
Date: Thu, 23 Nov 2023 17:15:43 +0000
Message-Id: <20231123171544.906577-1-ciara.power@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231123170701.901946-1-ciara.power@intel.com>
References: <20231123170701.901946-1-ciara.power@intel.com>

Currently, when using IPsec-mb 1.4+, the process ID is obtained for each
job in a burst with a call to getpid(). This system call costs too many
CPU cycles and is unnecessary per job.

Instead, cache the process ID value per lcore and read it once when
processing the burst, rather than once per job.
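For reference, the per-lcore caching pattern described above looks roughly
like the standalone sketch below. It only illustrates the idea using DPDK's
per-lcore TLS macros; the names cached_pid and cached_getpid() are
hypothetical and are not part of this patch.

#include <unistd.h>       /* getpid() */
#include <sys/types.h>    /* pid_t */
#include <rte_per_lcore.h>

/* One private copy of the PID per lcore (thread-local storage). */
RTE_DEFINE_PER_LCORE(pid_t, cached_pid);

/* Hypothetical helper: return the PID without a syscall on the hot path.
 * The first call on each lcore pays for one getpid(); later calls only
 * read the thread-local variable.
 */
static inline pid_t
cached_getpid(void)
{
	if (!RTE_PER_LCORE(cached_pid))
		RTE_PER_LCORE(cached_pid) = getpid();
	return RTE_PER_LCORE(cached_pid);
}

The patch applies the same idea at burst granularity: the dequeue path
resolves the PID once per burst and passes it down to set_mb_job_params(),
so the per-job comparison against session->pid no longer issues a system
call.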
Fixes: 9593d83e5d88 ("crypto/ipsec_mb: fix aesni_mb multi-process session ID")
Cc: stable@dpdk.org

Signed-off-by: Ciara Power
Acked-by: Kai Ji
Acked-by: Pablo de Lara
---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index ece9cfd5ed..4de4866cf3 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -6,6 +6,8 @@
 
 #include "pmd_aesni_mb_priv.h"
 
+RTE_DEFINE_PER_LCORE(pid_t, pid);
+
 struct aesni_mb_op_buf_data {
 	struct rte_mbuf *m;
 	uint32_t offset;
@@ -846,6 +848,7 @@ aesni_mb_session_configure(IMB_MGR *mb_mgr,
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
 	sess->session_id = imb_set_session(mb_mgr, &sess->template_job);
 	sess->pid = getpid();
+	RTE_PER_LCORE(pid) = sess->pid;
 #endif
 
 	return 0;
@@ -1503,7 +1506,7 @@ aesni_mb_digest_appended_in_src(struct rte_crypto_op *op, IMB_JOB *job,
 static inline int
 set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 		struct rte_crypto_op *op, uint8_t *digest_idx,
-		IMB_MGR *mb_mgr)
+		IMB_MGR *mb_mgr, pid_t pid)
 {
 	struct rte_mbuf *m_src = op->sym->m_src, *m_dst;
 	struct aesni_mb_qp_data *qp_data = ipsec_mb_get_qp_private_data(qp);
@@ -1517,6 +1520,10 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 	uint8_t sgl = 0;
 	uint8_t lb_sgl = 0;
 
+#if IMB_VERSION(1, 3, 0) >= IMB_VERSION_NUM
+	(void) pid;
+#endif
+
 	session = ipsec_mb_get_session_private(qp, op);
 	if (session == NULL) {
 		op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
@@ -1527,7 +1534,7 @@ set_mb_job_params(IMB_JOB *job, struct ipsec_mb_qp *qp,
 			session->template_job.cipher_mode;
 
 #if IMB_VERSION(1, 3, 0) < IMB_VERSION_NUM
-	if (session->pid != getpid()) {
+	if (session->pid != pid) {
 		memcpy(job, &session->template_job, sizeof(IMB_JOB));
 		imb_set_session(mb_mgr, job);
 	} else if (job->session_id != session->session_id)
@@ -2136,6 +2143,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	int retval, processed_jobs = 0;
 	uint16_t i, nb_jobs;
 	IMB_JOB *jobs[IMB_MAX_BURST_SIZE] = {NULL};
+	pid_t pid;
 
 	if (unlikely(nb_ops == 0 || mb_mgr == NULL))
 		return 0;
@@ -2176,6 +2184,11 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 			continue;
 		}
 
+		if (!RTE_PER_LCORE(pid))
+			RTE_PER_LCORE(pid) = getpid();
+
+		pid = RTE_PER_LCORE(pid);
+
 		/*
 		 * Get the next operations to process from ingress queue.
 		 * There is no need to return the job to the IMB_MGR
@@ -2194,7 +2207,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 					&digest_idx);
 		else
 			retval = set_mb_job_params(job, qp, op,
-					&digest_idx, mb_mgr);
+					&digest_idx, mb_mgr, pid);
 
 		if (unlikely(retval != 0)) {
 			qp->stats.dequeue_err_count++;
@@ -2317,6 +2330,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	struct rte_crypto_op *op;
 	IMB_JOB *job;
 	int retval, processed_jobs = 0;
+	pid_t pid = 0;
 
 	if (unlikely(nb_ops == 0 || mb_mgr == NULL))
 		return 0;
@@ -2353,7 +2367,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 					&digest_idx);
 		else
 			retval = set_mb_job_params(job, qp, op,
-					&digest_idx, mb_mgr);
+					&digest_idx, mb_mgr, pid);
 
 		if (unlikely(retval != 0)) {
 			qp->stats.dequeue_err_count++;