From patchwork Tue Aug 30 01:13:46 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Aleksandr Miloshenko
X-Patchwork-Id: 115630
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Aleksandr Miloshenko
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Aleksandr Miloshenko, stable@dpdk.org
Subject: [PATCH v2] net/iavf: do Tx done cleanup starting from Tx tail
Date: Mon, 29 Aug 2022 18:13:46 -0700
Message-Id: <20220830011346.24657-1-a.miloshenko@f5.com>
X-Mailer: git-send-email 2.37.0
In-Reply-To: <20220707001414.25105-1-a.miloshenko@f5.com>
References: <20220707001414.25105-1-a.miloshenko@f5.com>
List-Id: DPDK patches and discussions

iavf_xmit_pkts() sets tx_tail to the descriptor following the last
transmitted Tx descriptor. So the cleanup of completed Tx descriptors
must start from tx_tail, not from the descriptor after tx_tail.
Otherwise rte_eth_tx_done_cleanup() doesn't free the first completed
mbuf when the Tx queue is full.

Fixes: 86e44244f95c ("net/iavf: cleanup Tx buffers")
Cc: stable@dpdk.org

Signed-off-by: Aleksandr Miloshenko
Acked-by: Qi Zhang
---
v2:
* Fixed the commit style.
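
For illustration only (not part of the commit, placed here after the "---"
notes): a minimal standalone model of the ring walk described above, using
hypothetical names. When the Tx queue is full, every software-ring slot holds
a completed mbuf and tx_tail already points at one of them, so a walk that
begins at next_id visits one slot too few, while starting at tx_tail and
stopping after one full lap reclaims them all.

/*
 * Standalone sketch (hypothetical names, not DPDK driver code): a 4-entry
 * software ring where every slot holds a completed mbuf, i.e. the Tx queue
 * is full.  Starting the cleanup walk at tx_tail and stopping after one
 * full lap frees all 4 slots; starting at next_id would free only 3.
 */
#include <stdio.h>

#define RING_SIZE 4u

struct sw_entry {
	int mbuf_done;          /* stands in for a completed mbuf pointer */
	unsigned int next_id;   /* circular link, like sw_ring[].next_id */
};

int main(void)
{
	struct sw_entry ring[RING_SIZE];
	unsigned int tx_tail = 1;   /* slot the driver would fill next */
	unsigned int freed = 0;

	for (unsigned int i = 0; i < RING_SIZE; i++) {
		ring[i].mbuf_done = 1;                  /* queue full: all completed */
		ring[i].next_id = (i + 1) % RING_SIZE;
	}

	/* Fixed behaviour: begin the walk at tx_tail, stop after one full lap. */
	unsigned int tx_id = tx_tail;
	unsigned int tx_last = tx_id;
	do {
		if (ring[tx_id].mbuf_done) {
			ring[tx_id].mbuf_done = 0;      /* "free" the mbuf */
			freed++;
		}
		tx_id = ring[tx_id].next_id;
	} while (tx_id != tx_last);

	printf("freed %u of %u completed slots\n", freed, RING_SIZE);
	return 0;
}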
 drivers/net/iavf/iavf_rxtx.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 109ba756f8..7cd5db6e49 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3184,14 +3184,14 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 		uint32_t free_cnt)
 {
 	struct iavf_tx_entry *swr_ring = txq->sw_ring;
-	uint16_t i, tx_last, tx_id;
+	uint16_t tx_last, tx_id;
 	uint16_t nb_tx_free_last;
 	uint16_t nb_tx_to_clean;
-	uint32_t pkt_cnt;
+	uint32_t pkt_cnt = 0;
 
-	/* Start free mbuf from the next of tx_tail */
-	tx_last = txq->tx_tail;
-	tx_id = swr_ring[tx_last].next_id;
+	/* Start free mbuf from tx_tail */
+	tx_id = txq->tx_tail;
+	tx_last = tx_id;
 
 	if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
 		return 0;
@@ -3204,10 +3204,8 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 	/* Loop through swr_ring to count the amount of
 	 * freeable mubfs and packets.
 	 */
-	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
-		for (i = 0; i < nb_tx_to_clean &&
-			pkt_cnt < free_cnt &&
-			tx_id != tx_last; i++) {
+	while (pkt_cnt < free_cnt) {
+		do {
 			if (swr_ring[tx_id].mbuf != NULL) {
 				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
 				swr_ring[tx_id].mbuf = NULL;
@@ -3220,7 +3218,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 			}
 
 			tx_id = swr_ring[tx_id].next_id;
-		}
+		} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
 
 		if (txq->rs_thresh > txq->nb_tx_desc -
 			txq->nb_free || tx_id == tx_last)
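
For context, a hedged application-side sketch of the path that exposes the
bug: when rte_eth_tx_burst() cannot queue everything because the Tx ring is
full, the application may call rte_eth_tx_done_cleanup() to reclaim completed
mbufs and retry. The helper name and the port/queue parameters below are
illustrative and not taken from the patch.

/*
 * Illustrative helper (not part of the patch): retry a Tx burst after asking
 * the PMD to free completed mbufs.  A free_cnt of 0 requests freeing as many
 * completed mbufs as possible.
 */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_burst_with_cleanup(uint16_t port_id, uint16_t queue_id,
		      struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	if (sent < nb_pkts &&
	    rte_eth_tx_done_cleanup(port_id, queue_id, 0) > 0) {
		/* Some descriptors were reclaimed; try the remainder once more. */
		sent += rte_eth_tx_burst(port_id, queue_id,
					 pkts + sent, nb_pkts - sent);
	}
	return sent;
}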