From patchwork Fri Mar 27 10:18:16 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67296
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michal Krawczyk <mk@semihalf.com>
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, Michal Krawczyk <mk@semihalf.com>
Date: Fri, 27 Mar 2020 11:18:16 +0100
Message-Id: <20200327101823.12646-23-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200327101823.12646-1-mk@semihalf.com>
References: <20200327101823.12646-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 22/29] net/ena: rework getting number of available descs

The ena_com API should be preferred for getting the number of
used/available descriptors, unless an extra calculation needs to be
performed.

Some helper variables were added for storing values that are reused
later. Moreover, to limit the number of sent/received packets to the
number of available descriptors, RTE_MIN is used instead of an if
statement, which did a similar thing but was less descriptive.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
---
 drivers/net/ena/ena_ethdev.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 40221eb6ab..8f8a06d5ba 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1415,7 +1415,8 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	if (unlikely(!count))
 		return 0;
 
-	in_use = rxq->next_to_use - rxq->next_to_clean;
+	in_use = ring_size - ena_com_free_q_entries(rxq->ena_com_io_sq) - 1;
+
 	ena_assert_msg(((in_use + count) < ring_size), "bad ring state\n");
 
 	/* get resources for incoming packets */
@@ -2129,8 +2130,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int ring_size = rx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
+	unsigned int refill_required;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
-	uint16_t desc_in_use = 0;
+	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
 	uint16_t completed;
 	struct ena_com_rx_ctx ena_rx_ctx;
@@ -2143,9 +2145,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return 0;
 	}
 
-	desc_in_use = rx_ring->next_to_use - next_to_clean;
-	if (unlikely(nb_pkts > desc_in_use))
-		nb_pkts = desc_in_use;
+	descs_in_use = ring_size -
+		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
+	nb_pkts = RTE_MIN(descs_in_use, nb_pkts);
 
 	for (completed = 0; completed < nb_pkts; completed++) {
 		ena_rx_ctx.max_bufs = rx_ring->sgl_size;
@@ -2197,11 +2199,11 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;
 
-	desc_in_use = desc_in_use - completed + 1;
+	refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (ring_size - desc_in_use > ENA_RING_DESCS_RATIO(ring_size)) {
+	if (refill_required > ENA_RING_DESCS_RATIO(ring_size)) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
-		ena_populate_rx_queue(rx_ring, ring_size - desc_in_use);
+		ena_populate_rx_queue(rx_ring, refill_required);
 	}
 
 	return completed;
@@ -2344,7 +2346,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct ena_tx_buffer *tx_info;
 	struct ena_com_buf *ebuf;
 	uint16_t rc, req_id, total_tx_descs = 0;
-	uint16_t sent_idx = 0, empty_tx_reqs;
+	uint16_t sent_idx = 0;
 	uint16_t push_len = 0;
 	uint16_t delta = 0;
 	int nb_hw_desc;
@@ -2357,9 +2359,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		return 0;
 	}
 
-	empty_tx_reqs = ring_size - (next_to_use - next_to_clean);
-	if (nb_pkts > empty_tx_reqs)
-		nb_pkts = empty_tx_reqs;
+	nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq),
+		nb_pkts);
 
 	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
 		mbuf = tx_pkts[sent_idx];
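
For reference, the descriptor accounting used above can be exercised in
isolation. The snippet below is a minimal, self-contained sketch, not DPDK
code: mock_ring, mock_free_q_entries() and MIN() are hypothetical stand-ins
for the driver's ring state, ena_com_free_q_entries() and RTE_MIN(), and the
refill threshold only approximates ENA_RING_DESCS_RATIO().

/*
 * Minimal sketch of the "free entries -> in use -> clamp -> refill" math.
 * All names here are stand-ins; none of this is the real ena/DPDK API.
 */
#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

struct mock_ring {
	uint16_t ring_size;    /* power of two, e.g. 1024 */
	uint16_t free_entries; /* what ena_com_free_q_entries() would report */
};

static uint16_t mock_free_q_entries(const struct mock_ring *r)
{
	return r->free_entries;
}

int main(void)
{
	struct mock_ring rx = { .ring_size = 1024, .free_entries = 900 };
	uint16_t nb_pkts = 256;
	uint16_t descs_in_use, refill_required;

	/* In-use descriptors derived from the free-entry count; one slot of
	 * the ring is treated as reserved, hence the "- 1". */
	descs_in_use = rx.ring_size - mock_free_q_entries(&rx) - 1;

	/* Clamp the burst with MIN() instead of an if, mirroring
	 * nb_pkts = RTE_MIN(descs_in_use, nb_pkts); in the patch. */
	nb_pkts = MIN(descs_in_use, nb_pkts);

	/* Burst refill: only top the ring up once enough entries are free,
	 * to amortize doorbells and memory barriers. */
	refill_required = mock_free_q_entries(&rx);
	if (refill_required > rx.ring_size / 8) /* stand-in threshold */
		printf("refill %u descriptors\n", refill_required);

	printf("in use: %u, burst clamped to: %u\n", descs_in_use, nb_pkts);
	return 0;
}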