From patchwork Fri Mar 27 10:18:17 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67297
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com,
 evgenys@amazon.com, igorch@amazon.com, Michal Krawczyk
Date: Fri, 27 Mar 2020 11:18:17 +0100
Message-Id: <20200327101823.12646-24-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200327101823.12646-1-mk@semihalf.com>
References: <20200327101823.12646-1-mk@semihalf.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 23/29] net/ena: limit refill threshold by fixed
 value
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

The divider used for both the Tx and Rx cleanup/refill thresholds can
cause too long a delay for very big rings. For example, with an 8k Rx
ring the refill does not trigger until 1024 descriptors are free, and
the driver then tries to allocate that many descriptors in one call.
Capping the threshold at a fixed value - 256 in this case - bounds the
maximum time spent in the repopulate function.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 27 ++++++++++++++-------------
 drivers/net/ena/ena_ethdev.h | 10 ++++++++++
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 8f8a06d5ba..3e288d56c4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -34,14 +34,6 @@
 /*reverse version of ENA_IO_RXQ_IDX*/
 #define ENA_IO_RXQ_IDX_REV(q) ((q - 1) / 2)
 
-/* While processing submitted and completed descriptors (rx and tx path
- * respectively) in a loop it is desired to:
- *  - perform batch submissions while populating sumbissmion queue
- *  - avoid blocking transmission of other packets during cleanup phase
- * Hence the utilization ratio of 1/8 of a queue size.
- */
-#define ENA_RING_DESCS_RATIO(ring_size)	(ring_size / 8)
-
 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
 #define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift))
 
@@ -2130,7 +2122,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int ring_size = rx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
-	unsigned int refill_required;
+	unsigned int free_queue_entries;
+	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
@@ -2199,11 +2192,15 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;
 
-	refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	refill_threshold =
+		RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+			(unsigned int)ENA_REFILL_THRESH_PACKET);
+
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (refill_required > ENA_RING_DESCS_RATIO(ring_size)) {
+	if (free_queue_entries > refill_threshold) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
-		ena_populate_rx_queue(rx_ring, refill_required);
+		ena_populate_rx_queue(rx_ring, free_queue_entries);
 	}
 
 	return completed;
@@ -2342,6 +2339,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t seg_len;
 	unsigned int ring_size = tx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
+	unsigned int cleanup_budget;
 	struct ena_com_tx_ctx ena_tx_ctx;
 	struct ena_tx_buffer *tx_info;
 	struct ena_com_buf *ebuf;
@@ -2499,9 +2497,12 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Put back descriptor to the ring for reuse */
 		tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id;
 		next_to_clean++;
+		cleanup_budget =
+			RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+				(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 		/* If too many descs to clean, leave it for another run */
-		if (unlikely(total_tx_descs > ENA_RING_DESCS_RATIO(ring_size)))
+		if (unlikely(total_tx_descs > cleanup_budget))
 			break;
 	}
 	tx_ring->tx_stats.available_desc =
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 64d93dac02..6634d0134f 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -30,6 +30,16 @@
 #define ENA_WD_TIMEOUT_SEC	3
 #define ENA_DEVICE_KALIVE_TIMEOUT	(ENA_WD_TIMEOUT_SEC * rte_get_timer_hz())
 
+/* While processing submitted and completed descriptors (rx and tx path
+ * respectively) in a loop, it is desired to:
+ *  - perform batch submissions while populating the submission queue
+ *  - avoid blocking transmission of other packets during cleanup phase
+ * Hence the utilization ratio of 1/8 of a queue size, or a fixed max value
+ * if the size of the ring is very big - like 8k Rx rings.
+ */
+#define ENA_REFILL_THRESH_DIVIDER	8
+#define ENA_REFILL_THRESH_PACKET	256
+
 struct ena_adapter;
 
 enum ena_ring_type {
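
To make the effect of the cap concrete, here is a minimal standalone
sketch of the threshold arithmetic, not driver code: min_uint is a
local stand-in for RTE_MIN, and the two macros mirror the values this
patch adds to ena_ethdev.h.

#include <stdio.h>

#define ENA_REFILL_THRESH_DIVIDER	8
#define ENA_REFILL_THRESH_PACKET	256

static unsigned int min_uint(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int sizes[] = { 512, 1024, 4096, 8192 };
	unsigned int i;

	for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int ring_size = sizes[i];
		/* Old behavior: threshold grows linearly with ring size. */
		unsigned int old_thresh = ring_size / ENA_REFILL_THRESH_DIVIDER;
		/* New behavior: same divider, capped at a fixed value. */
		unsigned int new_thresh = min_uint(old_thresh,
						   ENA_REFILL_THRESH_PACKET);

		printf("ring=%u old=%u new=%u\n",
		       ring_size, old_thresh, new_thresh);
	}
	return 0;
}

For an 8k Rx ring this prints "ring=8192 old=1024 new=256": the refill
and Tx cleanup now kick in after at most 256 descriptors instead of
1024, while smaller rings (512 or 1024 entries, where 1/8 of the ring
is below the cap) keep their previous thresholds.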