From patchwork Tue Feb 22 18:11:33 2022
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 108016
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Dawid Gorecki
Subject: [PATCH v2 08/21] net/ena: perform Tx cleanup before sending pkts
Date: Tue, 22 Feb 2022 19:11:33 +0100
Message-Id: <20220222181146.28882-9-mk@semihalf.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220222181146.28882-1-mk@semihalf.com>
References: <20220222160634.24489-1-mk@semihalf.com> <20220222181146.28882-1-mk@semihalf.com>
List-Id: DPDK patches and discussions

To increase the likelihood that the current burst will fit in the HW rings,
perform the Tx cleanup before pushing packets to the HW. This may slightly
increase latency for sparse bursts, but the Tx flow should now be smoother.
This ordering is also common in the Tx burst functions of other PMDs.

Signed-off-by: Michal Krawczyk
Reviewed-by: Dawid Gorecki
Reviewed-by: Shai Brandes
---
 drivers/net/ena/ena_ethdev.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4b82372155..ed3dd162ba 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -2776,6 +2776,10 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	}
 #endif
 
+	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
+	if (available_desc < tx_ring->tx_free_thresh)
+		ena_tx_cleanup(tx_ring);
+
 	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
 		if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx]))
 			break;
@@ -2784,9 +2788,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 			tx_ring->size_mask)]);
 	}
 
-	available_desc = ena_com_free_q_entries(tx_ring->ena_com_io_sq);
-	tx_ring->tx_stats.available_desc = available_desc;
-
 	/* If there are ready packets to be xmitted... */
 	if (likely(tx_ring->pkts_without_db)) {
 		/* ...let HW do its best :-) */
@@ -2795,9 +2796,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_ring->pkts_without_db = false;
 	}
 
-	if (available_desc < tx_ring->tx_free_thresh)
-		ena_tx_cleanup(tx_ring);
-
 	tx_ring->tx_stats.available_desc =
 		ena_com_free_q_entries(tx_ring->ena_com_io_sq);
 	tx_ring->tx_stats.tx_poll++;
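
For readers less familiar with the driver, the following standalone sketch
(illustrative only, not ENA driver code; every name in it is made up) models
why reclaiming completed descriptors before the enqueue loop raises the
chance that an entire burst fits in the ring:

/*
 * Illustrative sketch only: a toy ring where "completed" descriptors can be
 * reclaimed by a cleanup step before new packets are enqueued.
 */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE   16
#define FREE_THRESH  4

struct toy_tx_ring {
	uint16_t in_use;    /* descriptors currently handed to "HW" */
	uint16_t completed; /* descriptors the "HW" has finished with */
};

/* Reclaim descriptors the hardware has already completed. */
static void toy_tx_cleanup(struct toy_tx_ring *r)
{
	r->in_use -= r->completed;
	r->completed = 0;
}

static uint16_t toy_xmit_burst(struct toy_tx_ring *r, uint16_t nb_pkts)
{
	uint16_t free_desc = RING_SIZE - r->in_use;

	/* Cleanup first, as the patch does, so the loop below sees the
	 * largest possible number of free descriptors. */
	if (free_desc < FREE_THRESH)
		toy_tx_cleanup(r);

	uint16_t sent = 0;
	while (sent < nb_pkts && r->in_use < RING_SIZE) {
		r->in_use++;
		sent++;
	}
	return sent;
}

int main(void)
{
	struct toy_tx_ring r = { .in_use = 14, .completed = 10 };

	/* With cleanup up front all 8 packets fit; with the old ordering
	 * (cleanup after the enqueue loop) only the 2 initially free
	 * descriptors would have been usable for this burst. */
	printf("sent %u of 8 packets\n", toy_xmit_burst(&r, 8));
	return 0;
}

The numbers in main() are arbitrary; the point is that the burst sees free
descriptors that would otherwise only become available on the next call, at
the cost of doing the cleanup work before, rather than after, the doorbell.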