From patchwork Wed Nov 30 17:10:13 2022
X-Patchwork-Submitter: Ganapati Kundapura
X-Patchwork-Id: 120383
X-Patchwork-Delegate: jerinj@marvell.com
From: Ganapati Kundapura
To: dev@dpdk.org, jerinj@marvell.com, s.v.naga.harish.k@intel.com,
 abhinandan.gujjar@intel.com
Cc: jay.jayatheerthan@intel.com
Subject: [PATCH v1 4/5] eventdev/crypto: overflow in circular buffer
Date: Wed, 30 Nov 2022 11:10:13 -0600
Message-Id: <20221130171014.1723899-4-ganapati.kundapura@intel.com>
In-Reply-To: <20221130171014.1723899-1-ganapati.kundapura@intel.com>
References: <20221130171014.1723899-1-ganapati.kundapura@intel.com>

Crypto adapter checks CPM backpressure only once, before the enqueue
loop in enq_run(). This leads to a circular buffer overflow if some
ops fail to flush to the cryptodev.
Check CPM backpressure on every iteration of the enqueue loop in
enq_run().

Signed-off-by: Ganapati Kundapura
---

diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index 72deedd..1d39c5b 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -573,14 +573,15 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW)
 		return 0;
 
-	if (unlikely(adapter->stop_enq_to_cryptodev)) {
-		nb_enqueued += eca_crypto_enq_flush(adapter);
+	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 
-		if (unlikely(adapter->stop_enq_to_cryptodev))
-			goto skip_event_dequeue_burst;
-	}
+		if (unlikely(adapter->stop_enq_to_cryptodev)) {
+			nb_enqueued += eca_crypto_enq_flush(adapter);
+
+			if (unlikely(adapter->stop_enq_to_cryptodev))
+				break;
+		}
 
-	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
 		stats->event_poll_count++;
 		n = rte_event_dequeue_burst(event_dev_id,
 				event_port_id, ev, BATCH_SIZE, 0);
@@ -591,8 +592,6 @@ eca_crypto_adapter_enq_run(struct event_crypto_adapter *adapter,
 		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
 	}
 
-skip_event_dequeue_burst:
-
 	if ((++adapter->transmit_loop_count &
 	     (CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
 		nb_enqueued += eca_crypto_enq_flush(adapter);
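
Note for reviewers (not part of the patch): the control-flow change can be
seen in isolation with the simplified model below. All names in it
(adapter_model, stop_enq, flush_some(), etc.) are illustrative stand-ins and
not the rte_event_crypto_adapter API; the point is only that the backpressure
check now runs on every pass of the enqueue loop, flushing the circular
buffer or breaking out before any further events are dequeued, so buffered
ops can no longer accumulate past the buffer size.

/* Simplified, self-contained sketch of the fixed enqueue loop.
 * Types and helpers here are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

struct adapter_model {
	bool stop_enq;         /* cryptodev is backpressured           */
	unsigned int buffered; /* ops waiting in the circular buffer   */
};

/* Pretend the cryptodev accepts a couple of buffered ops per flush. */
static unsigned int flush_some(struct adapter_model *a)
{
	unsigned int done = a->buffered < 2 ? a->buffered : 2;

	a->buffered -= done;
	a->stop_enq = (a->buffered != 0);
	return done;
}

static unsigned int enq_run(struct adapter_model *a, unsigned int max_enq)
{
	unsigned int nb_enq, nb_enqueued = 0;

	for (nb_enq = 0; nb_enq < max_enq; nb_enq++) {
		/* Re-check backpressure on every iteration (the fix). */
		if (a->stop_enq) {
			nb_enqueued += flush_some(a);
			if (a->stop_enq)
				break; /* still backpressured: stop early */
		}
		/* ...dequeue events and enqueue them to the cryptodev... */
	}
	return nb_enqueued;
}

int main(void)
{
	struct adapter_model a = { .stop_enq = true, .buffered = 5 };

	printf("flushed %u buffered ops\n", enq_run(&a, 8));
	printf("%u ops still buffered\n", a.buffered);
	return 0;
}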