From patchwork Thu Aug 2 04:49:40 2018
X-Patchwork-Submitter: "De Lara Guarch, Pablo"
X-Patchwork-Id: 43526
X-Patchwork-Delegate: gakhil@marvell.com
From: Pablo de Lara
To: konstantin.ananyev@intel.com, declan.doherty@intel.com
Cc: dev@dpdk.org, Pablo de Lara, stable@dpdk.org
Date: Thu, 2 Aug 2018 05:49:40 +0100
Message-Id: <20180802044940.23114-1-pablo.de.lara.guarch@intel.com>
Subject: [dpdk-dev] [PATCH] crypto/aesni_mb: fix possible array overrun

In order to process crypto operations in the AESNI MB PMD, they need to be
sent to the buffer manager of the Multi-buffer library, through the "job"
structure. Currently, the PMD first checks whether there is an outstanding
operation to process in the ring, before getting a new job from the manager.
However, if there are no available jobs in the manager, a flush operation
needs to take place, freeing some of the jobs so one can be used for the
outstanding operation. To avoid leaving the dequeued operation unprocessed,
the maximum number of operations that can be flushed is the number of
remaining operations to return: the maximum number of operations that can be
returned minus the number of operations already ready to be returned
(nb_ops - processed_jobs), minus 1 for the new operation.

The problem arises when (nb_ops - processed_jobs) is 1 (the last operation
to dequeue). In that case, flush_mb_mgr is called with a maximum number of
operations equal to 0, which is wrong and causes a potential overrun of the
"ops" array. Moreover, the operation dequeued from the ring is leaked, as no
more operations can be returned.

The solution is to first check whether there are jobs available in the
manager. If there are not, the flush operation is called, and if it returns
enough operations, no more outstanding operations are dequeued from the
ring, avoiding both the memory leak and the array overrun. If there are
enough jobs, the PMD tries to dequeue an operation from the ring. If the
ring is empty, the new job pointer is simply not used; the next
get_next_job call will hand it out again, so no memory leak happens.
Fixes: 0f548b50a160 ("crypto/aesni_mb: process crypto op on dequeue")
Cc: stable@dpdk.org

Signed-off-by: Pablo de Lara
Acked-by: Konstantin Ananyev
---
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 93dc7a443..e2dd834f0 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -833,22 +833,30 @@ aesni_mb_pmd_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 	uint8_t digest_idx = qp->digest_idx;
 
 	do {
-		/* Get next operation to process from ingress queue */
-		retval = rte_ring_dequeue(qp->ingress_queue, (void **)&op);
-		if (retval < 0)
-			break;
-
 		/* Get next free mb job struct from mb manager */
 		job = (*qp->op_fns->job.get_next)(qp->mb_mgr);
 		if (unlikely(job == NULL)) {
 			/* if no free mb job structs we need to flush mb_mgr */
 			processed_jobs += flush_mb_mgr(qp,
 					&ops[processed_jobs],
-					(nb_ops - processed_jobs) - 1);
+					nb_ops - processed_jobs);
+
+			if (nb_ops == processed_jobs)
+				break;
 
 			job = (*qp->op_fns->job.get_next)(qp->mb_mgr);
 		}
 
+		/*
+		 * Get next operation to process from ingress queue.
+		 * There is no need to return the job to the MB_MGR
+		 * if there are no more operations to process, since the MB_MGR
+		 * can use that pointer again in next get_next calls.
+		 */
+		retval = rte_ring_dequeue(qp->ingress_queue, (void **)&op);
+		if (retval < 0)
+			break;
+
 		retval = set_mb_job_params(job, qp, op, &digest_idx);
 		if (unlikely(retval != 0)) {
 			qp->stats.dequeue_err_count++;