From patchwork Tue Feb 22 16:06:21 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 107984
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Dawid Gorecki
Subject: [PATCH 08/21] net/ena: perform Tx cleanup before sending pkts
Date: Tue, 22 Feb 2022 17:06:21 +0100
Message-Id: <20220222160634.24489-9-mk@semihalf.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220222160634.24489-1-mk@semihalf.com>
References: <20220222160634.24489-1-mk@semihalf.com>
MIME-Version: 1.0

To increase the likelihood that the current burst will fit in the HW
rings, perform Tx cleanup before pushing packets to the HW. This may
slightly increase latency for sparse bursts, but the Tx flow should now
be smoother. It is also the common order in the Tx burst functions of
other PMDs.

Signed-off-by: Michal Krawczyk
Reviewed-by: Dawid Gorecki
Reviewed-by: Shai Brandes
---
 drivers/net/ena/ena_ethdev.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4b82372155..ed3dd162ba 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2776,6 +2776,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	}
 #endif
 
+	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	if (available_desc < tx_ring->tx_free_thresh)
+		ena_tx_cleanup(tx_ring);
+
 	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
 		if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx]))
 			break;
@@ -2784,9 +2788,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 				       tx_ring->size_mask)]);
 	}
 
-	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
-	tx_ring->tx_stats.available_desc = available_desc;
-
 	/* If there are ready packets to be xmitted... */
 	if (likely(tx_ring->pkts_without_db)) {
 		/* ...let HW do its best :-) */
@@ -2795,9 +2796,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_ring->pkts_without_db = false;
 	}
 
-	if (available_desc < tx_ring->tx_free_thresh)
-		ena_tx_cleanup(tx_ring);
-
 	tx_ring->tx_stats.available_desc =
 		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
 	tx_ring->tx_stats.tx_poll++;
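
For readers less familiar with the driver internals, the self-contained
sketch below illustrates the clean-before-send ordering the patch
introduces: reclaim completed descriptors when the free count drops
below the threshold, and only then try to enqueue the burst. The types
and helpers used here (sketch_tx_ring, sketch_tx_cleanup,
sketch_xmit_burst) are hypothetical simplifications for illustration
only, not the real ena_com/ethdev API.

/*
 * Illustrative sketch only (not part of the patch): clean-before-send
 * ordering with simplified, hypothetical types.
 */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical ring state, loosely mirroring the fields touched above. */
struct sketch_tx_ring {
	uint16_t free_descs;     /* free entries in the HW submission queue */
	uint16_t tx_free_thresh; /* cleanup threshold (cf. tx_free_thresh) */
};

/* Stand-in for ena_tx_cleanup(): reclaim descriptors of completed packets. */
static void sketch_tx_cleanup(struct sketch_tx_ring *ring, uint16_t completed)
{
	ring->free_descs += completed;
}

/*
 * Stand-in for the Tx burst function: reclaiming completions *before*
 * the enqueue loop makes it more likely that the whole burst fits.
 */
static uint16_t sketch_xmit_burst(struct sketch_tx_ring *ring,
				  uint16_t nb_pkts, uint16_t completed)
{
	if (ring->free_descs < ring->tx_free_thresh)
		sketch_tx_cleanup(ring, completed);

	/* Enqueue as many packets as the ring can currently hold. */
	uint16_t sent = nb_pkts < ring->free_descs ? nb_pkts : ring->free_descs;

	ring->free_descs -= sent;
	return sent;
}

int main(void)
{
	struct sketch_tx_ring ring = { .free_descs = 8, .tx_free_thresh = 32 };

	/* 16 completions are pending; cleanup runs first (8 + 16 = 24 free),
	 * so the whole 20-packet burst is accepted.
	 */
	printf("sent %u packets\n", sketch_xmit_burst(&ring, 20, 16));
	return 0;
}

With the pre-patch ordering (cleanup after the enqueue loop), the same
example burst would only partially fit: just 8 of the 20 packets would
be sent in that call, since the 16 reclaimable descriptors would not
have been freed yet.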