From patchwork Wed Apr 8 08:29:17 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67992
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:17 +0200
Message-Id: <20200408082921.31000-27-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 26/30] net/ena: use macros for ring idx operations

To improve code readability, abstraction was added for operating on IO rings indexes.
The driver was defining a local ring-mask variable in each function that needed to operate on the ring indexes. The mask is now stored in the ring itself, since its value only changes when the ring size changes, and macros for advancing the indexes with that mask have been added.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 53 ++++++++++++++++++------------------
 drivers/net/ena/ena_ethdev.h |  4 +++
 2 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 7804a5c85d..f6d0a75819 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1266,6 +1266,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->next_to_clean = 0;
 	txq->next_to_use = 0;
 	txq->ring_size = nb_desc;
+	txq->size_mask = nb_desc - 1;
 	txq->numa_socket_id = socket_id;
 
 	txq->tx_buffer_info = rte_zmalloc("txq->tx_buffer_info",
@@ -1361,6 +1362,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->next_to_clean = 0;
 	rxq->next_to_use = 0;
 	rxq->ring_size = nb_desc;
+	rxq->size_mask = nb_desc - 1;
 	rxq->numa_socket_id = socket_id;
 	rxq->mb_pool = mp;
 
@@ -1409,8 +1411,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 {
 	unsigned int i;
 	int rc;
-	uint16_t ring_size = rxq->ring_size;
-	uint16_t ring_mask = ring_size - 1;
 	uint16_t next_to_use = rxq->next_to_use;
 	uint16_t in_use, req_id;
 	struct rte_mbuf **mbufs = rxq->rx_refill_buffer;
@@ -1418,9 +1418,10 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	if (unlikely(!count))
 		return 0;
 
-	in_use = ring_size - ena_com_free_q_entries(rxq->ena_com_io_sq) - 1;
-
-	ena_assert_msg(((in_use + count) < ring_size), "bad ring state\n");
+	in_use = rxq->ring_size - 1 -
+		ena_com_free_q_entries(rxq->ena_com_io_sq);
+	ena_assert_msg(((in_use + count) < rxq->ring_size),
+		"bad ring state\n");
 
 	/* get resources for incoming packets */
 	rc = rte_mempool_get_bulk(rxq->mb_pool, (void **)mbufs, count);
@@ -1432,7 +1433,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	}
 
 	for (i = 0; i < count; i++) {
-		uint16_t next_to_use_masked = next_to_use & ring_mask;
 		struct rte_mbuf *mbuf = mbufs[i];
 		struct ena_com_buf ebuf;
 		struct ena_rx_buffer *rx_info;
@@ -1440,7 +1440,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		if (likely((i + 4) < count))
 			rte_prefetch0(mbufs[i + 4]);
 
-		req_id = rxq->empty_rx_reqs[next_to_use_masked];
+		req_id = rxq->empty_rx_reqs[next_to_use];
 		rc = validate_rx_req_id(rxq, req_id);
 		if (unlikely(rc))
 			break;
@@ -1458,7 +1458,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 			break;
 		}
 		rx_info->mbuf = mbuf;
-		next_to_use++;
+		next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, rxq->size_mask);
 	}
 
 	if (unlikely(i < count)) {
@@ -2072,7 +2072,6 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *mbuf_head;
 	struct ena_rx_buffer *rx_info;
-	unsigned int ring_mask = rx_ring->ring_size - 1;
 	uint16_t ntc, len, req_id, buf = 0;
 
 	if (unlikely(descs == 0))
@@ -2100,8 +2099,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 	mbuf_head->data_off += offset;
 
 	rx_info->mbuf = NULL;
-	rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
-	++ntc;
+	rx_ring->empty_rx_reqs[ntc] = req_id;
+	ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask);
 
 	while (--descs) {
 		++buf;
@@ -2123,8 +2122,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 		mbuf_head->pkt_len += len;
 
 		rx_info->mbuf = NULL;
-		rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
-		++ntc;
+		rx_ring->empty_rx_reqs[ntc] = req_id;
+		ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask);
 	}
 
 	*next_to_clean = ntc;
@@ -2136,8 +2135,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts)
 {
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
-	unsigned int ring_size = rx_ring->ring_size;
-	unsigned int ring_mask = ring_size - 1;
 	unsigned int free_queue_entries;
 	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
@@ -2154,7 +2151,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return 0;
 	}
 
-	descs_in_use = ring_size -
+	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
 	nb_pkts = RTE_MIN(descs_in_use, nb_pkts);
 
@@ -2183,9 +2180,10 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				    ena_rx_ctx.pkt_offset);
 		if (unlikely(mbuf == NULL)) {
 			for (i = 0; i < ena_rx_ctx.descs; ++i) {
-				rx_ring->empty_rx_reqs[next_to_clean & ring_mask] =
+				rx_ring->empty_rx_reqs[next_to_clean] =
 					rx_ring->ena_bufs[i].req_id;
-				++next_to_clean;
+				next_to_clean = ENA_IDX_NEXT_MASKED(
+					next_to_clean, rx_ring->size_mask);
 			}
 			break;
 		}
@@ -2210,7 +2208,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
 	refill_threshold =
-		RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+		RTE_MIN(rx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
 		(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 	/* Burst refill to save doorbells, memory barriers, const interval */
@@ -2353,8 +2351,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t next_to_clean = tx_ring->next_to_clean;
 	struct rte_mbuf *mbuf;
 	uint16_t seg_len;
-	unsigned int ring_size = tx_ring->ring_size;
-	unsigned int ring_mask = ring_size - 1;
 	unsigned int cleanup_budget;
 	struct ena_com_tx_ctx ena_tx_ctx;
 	struct ena_tx_buffer *tx_info;
@@ -2384,7 +2380,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (unlikely(rc))
 			break;
 
-		req_id = tx_ring->empty_tx_reqs[next_to_use & ring_mask];
+		req_id = tx_ring->empty_tx_reqs[next_to_use];
 		tx_info = &tx_ring->tx_buffer_info[req_id];
 		tx_info->mbuf = mbuf;
 		tx_info->num_of_bufs = 0;
@@ -2428,7 +2424,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
 			tx_ring->disable_meta_caching);
 
-		rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]);
+		rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED(
+			sent_idx, 4, tx_ring->size_mask)]);
 
 		/* Process first segment taking into
 		 * consideration pushed header
@@ -2480,7 +2477,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 
 		tx_info->tx_descs = nb_hw_desc;
-		next_to_use++;
+		next_to_use = ENA_IDX_NEXT_MASKED(next_to_use,
+			tx_ring->size_mask);
 		tx_ring->tx_stats.cnt++;
 		tx_ring->tx_stats.bytes += total_length;
 	}
@@ -2511,10 +2509,11 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_info->mbuf = NULL;
 
 		/* Put back descriptor to the ring for reuse */
-		tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id;
-		next_to_clean++;
+		tx_ring->empty_tx_reqs[next_to_clean] = req_id;
+		next_to_clean = ENA_IDX_NEXT_MASKED(next_to_clean,
+			tx_ring->size_mask);
 		cleanup_budget =
-			RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+			RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
 			(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 		/* If too many descs to clean, leave it for another run */
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 13d87d48f0..6e24a4e582 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -40,6 +40,9 @@
 #define ENA_REFILL_THRESH_DIVIDER 8
 #define ENA_REFILL_THRESH_PACKET 256
 
+#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask))
+#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))
+
 struct ena_adapter;
 
 enum ena_ring_type {
@@ -109,6 +112,7 @@ struct ena_ring {
 	};
 	struct rte_mbuf **rx_refill_buffer;
 	unsigned int ring_size; /* number of tx/rx_buffer_info's entries */
+	unsigned int size_mask;
 	struct ena_com_io_cq *ena_com_io_cq;
 	struct ena_com_io_sq *ena_com_io_sq;
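
For reviewers who want to see the masked-index scheme in isolation, here is a
minimal standalone sketch. The two ENA_IDX_*_MASKED macros are copied from the
ena_ethdev.h hunk above; the demo_ring structure, demo_ring_init() and main()
are illustrative scaffolding only (they do not exist in the driver) and assume
a power-of-two ring size, which the mask arithmetic requires.

/* Standalone sketch of the masked ring-index advance (illustrative only). */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Macros taken from the ena_ethdev.h hunk above. */
#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask))
#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))

/* Hypothetical ring mirroring the ring_size/size_mask pair kept in ena_ring. */
struct demo_ring {
	unsigned int ring_size; /* must be a power of two */
	unsigned int size_mask; /* ring_size - 1 */
	uint16_t next_to_use;
	uint16_t next_to_clean;
};

static void demo_ring_init(struct demo_ring *ring, unsigned int size)
{
	/* The mask trick only works when the size is a power of two. */
	assert(size != 0 && (size & (size - 1)) == 0);
	ring->ring_size = size;
	ring->size_mask = size - 1;
	ring->next_to_use = 0;
	ring->next_to_clean = 0;
}

int main(void)
{
	struct demo_ring ring;
	unsigned int i;

	demo_ring_init(&ring, 8);

	/* Advance one slot at a time; the index wraps back to 0 after 7. */
	for (i = 0; i < 10; i++) {
		printf("next_to_use = %u\n", (unsigned int)ring.next_to_use);
		ring.next_to_use = ENA_IDX_NEXT_MASKED(ring.next_to_use,
						       ring.size_mask);
	}

	/* Jump ahead by four slots, as the Tx prefetch in the patch does. */
	ring.next_to_clean = ENA_IDX_ADD_MASKED(ring.next_to_clean, 4,
						ring.size_mask);
	printf("next_to_clean = %u\n", (unsigned int)ring.next_to_clean);

	return 0;
}

The point of storing size_mask in the ring is that every index advance becomes
a single bitwise AND, and the per-function ring_mask recomputation that the old
code carried around disappears.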