From patchwork Wed Mar 29 13:42:53 2017
X-Patchwork-Submitter: Sergio Gonzalez Monroy
X-Patchwork-Id: 22726
X-Patchwork-Delegate: pablo.de.lara.guarch@intel.com
From: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
To: dev@dpdk.org
Cc: declan.doherty@intel.com, pablo.de.lara.guarch@intel.com, stable@dpdk.org
Date: Wed, 29 Mar 2017 14:42:53 +0100
Message-Id: <20170329134253.31909-1-sergio.gonzalez.monroy@intel.com>
X-Mailer: git-send-email 2.9.3
Subject: [dpdk-dev] [PATCH] crypto/aesni_gcm: do crypto op in dequeue function

There is a bug when more crypto ops are enqueued than dequeued: the
return value is not checked when the processed crypto op is enqueued
into the internal ring, so when the ring is full, crypto ops and mbufs
are leaked. The issue is more obvious when different cores do the
enqueue and the dequeue.

This patch moves the crypto operation to the dequeue function, which
fixes the above issue without having to check the number of free
entries in the ring.
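For context, the pre-fix flow processed each op at enqueue time and then
pushed it onto the queue pair's processed_pkts ring with
rte_ring_enqueue(), ignoring its return value. A minimal sketch of the
missing check (illustrative only; push_completed_op is a made-up helper,
not the driver's actual code):

#include <rte_ring.h>
#include <rte_crypto.h>

/* Hypothetical helper showing the failure mode: rte_ring_enqueue()
 * returns 0 on success and -ENOBUFS when the ring is full. Ignoring
 * that return code drops the op, leaking it and the mbufs it holds. */
static void
push_completed_op(struct rte_ring *processed_pkts, struct rte_crypto_op *op)
{
	if (rte_ring_enqueue(processed_pkts, (void *)op) != 0) {
		/* Ring full: return the op to its mempool instead of
		 * leaking it (mbuf cleanup is still the app's job). */
		rte_crypto_op_free(op);
	}
}

The patch below avoids needing such a check at all: enqueue only stores
raw ops in the ring, whose capacity naturally bounds what can be
accepted, and the actual GCM processing runs at dequeue time.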
Fixes: eec136f3c54f ("aesni_gcm: add driver for AES-GCM crypto operations")

Signed-off-by: Sergio Gonzalez Monroy <sergio.gonzalez.monroy@intel.com>
Acked-by: Declan Doherty <declan.doherty@intel.com>
---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index a2d10a5..0ca834e 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -375,55 +375,58 @@ handle_completed_gcm_crypto_op(struct aesni_gcm_qp *qp,
 		rte_mempool_put(qp->sess_mp, op->sym->session);
 		op->sym->session = NULL;
 	}
-
-	rte_ring_enqueue(qp->processed_pkts, (void *)op);
 }
 
 static uint16_t
-aesni_gcm_pmd_enqueue_burst(void *queue_pair,
+aesni_gcm_pmd_dequeue_burst(void *queue_pair,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct aesni_gcm_session *sess;
 	struct aesni_gcm_qp *qp = queue_pair;
 
-	int i, retval = 0;
+	int retval = 0;
+	unsigned i, nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops);
 
-	for (i = 0; i < nb_ops; i++) {
+	for (i = 0; i < nb_dequeued; i++) {
 		sess = aesni_gcm_get_session(qp, ops[i]->sym);
 		if (unlikely(sess == NULL)) {
 			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			qp->qp_stats.enqueue_err_count++;
+			qp->qp_stats.dequeue_err_count++;
 			break;
 		}
 
 		retval = process_gcm_crypto_op(ops[i]->sym, sess);
 		if (retval < 0) {
 			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-			qp->qp_stats.enqueue_err_count++;
+			qp->qp_stats.dequeue_err_count++;
 			break;
 		}
 
 		handle_completed_gcm_crypto_op(qp, ops[i]);
-
-		qp->qp_stats.enqueued_count++;
 	}
+
+	qp->qp_stats.dequeued_count += i;
+
 	return i;
 }
 
 static uint16_t
-aesni_gcm_pmd_dequeue_burst(void *queue_pair,
+aesni_gcm_pmd_enqueue_burst(void *queue_pair,
 		struct rte_crypto_op **ops, uint16_t nb_ops)
 {
 	struct aesni_gcm_qp *qp = queue_pair;
-	unsigned nb_dequeued;
+	unsigned nb_enqueued;
 
-	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+	nb_enqueued = rte_ring_enqueue_burst(qp->processed_pkts,
 			(void **)ops, nb_ops);
 
-	qp->qp_stats.dequeued_count += nb_dequeued;
+	qp->qp_stats.enqueued_count += nb_enqueued;
 
-	return nb_dequeued;
+	return nb_enqueued;
 }
 
 static int
 aesni_gcm_remove(const char *name);
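For reference, a sketch of how an application would drive the reworked
PMD through the public cryptodev burst API. dev_id, qp_id, BURST_SIZE
and the retry policy are illustrative placeholders, not part of this
patch:

#include <rte_cryptodev.h>

#define BURST_SIZE 32

/* Producer lcore: with this patch, enqueue only stores ops in the qp
 * ring, so its return value reliably reports how many were accepted. */
static void
producer(uint8_t dev_id, uint16_t qp_id,
		struct rte_crypto_op **ops, uint16_t nb_ops)
{
	uint16_t sent = 0;

	while (sent < nb_ops)	/* ring full: retry the remainder */
		sent += rte_cryptodev_enqueue_burst(dev_id, qp_id,
				ops + sent, nb_ops - sent);
}

/* Consumer lcore: the GCM processing now happens here, at dequeue. */
static void
consumer(uint8_t dev_id, uint16_t qp_id)
{
	struct rte_crypto_op *deq_ops[BURST_SIZE];
	uint16_t i, nb_deq;

	nb_deq = rte_cryptodev_dequeue_burst(dev_id, qp_id,
			deq_ops, BURST_SIZE);
	for (i = 0; i < nb_deq; i++) {
		if (deq_ops[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS) {
			/* handle the failed op */
		}
		rte_crypto_op_free(deq_ops[i]);
	}
}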