From patchwork Fri Mar 27 10:18:18 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67298
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: Michal Krawczyk <mk@semihalf.com>
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, Michal Krawczyk <mk@semihalf.com>
Date: Fri, 27 Mar 2020 11:18:18 +0100
Message-Id: <20200327101823.12646-25-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200327101823.12646-1-mk@semihalf.com>
References: <20200327101823.12646-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 24/29] net/ena: use macros for ring idx operations

To improve code readability, an abstraction was added for operating
on the IO ring indexes.
The driver was defining a local variable for the ring mask in each
function that needed to operate on the ring indexes. Now it is stored
in the ring itself, as this value won't change unless the ring size
changes, and macros for advancing the indexes using the mask have
been added.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
---
 drivers/net/ena/ena_ethdev.c | 53 ++++++++++++++++++------------------
 drivers/net/ena/ena_ethdev.h |  4 +++
 2 files changed, 30 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 3e288d56c4..8ecbda4f76 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1253,6 +1253,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->next_to_clean = 0;
 	txq->next_to_use = 0;
 	txq->ring_size = nb_desc;
+	txq->size_mask = nb_desc - 1;
 	txq->numa_socket_id = socket_id;
 
 	txq->tx_buffer_info = rte_zmalloc("txq->tx_buffer_info",
@@ -1350,6 +1351,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->next_to_clean = 0;
 	rxq->next_to_use = 0;
 	rxq->ring_size = nb_desc;
+	rxq->size_mask = nb_desc - 1;
 	rxq->numa_socket_id = socket_id;
 	rxq->mb_pool = mp;
 
@@ -1398,8 +1400,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 {
 	unsigned int i;
 	int rc;
-	uint16_t ring_size = rxq->ring_size;
-	uint16_t ring_mask = ring_size - 1;
 	uint16_t next_to_use = rxq->next_to_use;
 	uint16_t in_use, req_id;
 	struct rte_mbuf **mbufs = rxq->rx_refill_buffer;
@@ -1407,9 +1407,10 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	if (unlikely(!count))
 		return 0;
 
-	in_use = ring_size - ena_com_free_q_entries(rxq->ena_com_io_sq) - 1;
-
-	ena_assert_msg(((in_use + count) < ring_size), "bad ring state\n");
+	in_use = rxq->ring_size - 1 -
+		ena_com_free_q_entries(rxq->ena_com_io_sq);
+	ena_assert_msg(((in_use + count) < rxq->ring_size),
+		"bad ring state\n");
 
 	/* get resources for incoming packets */
 	rc = rte_mempool_get_bulk(rxq->mb_pool, (void **)mbufs, count);
@@ -1421,7 +1422,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	}
 
 	for (i = 0; i < count; i++) {
-		uint16_t next_to_use_masked = next_to_use & ring_mask;
 		struct rte_mbuf *mbuf = mbufs[i];
 		struct ena_com_buf ebuf;
 		struct ena_rx_buffer *rx_info;
@@ -1429,7 +1429,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		if (likely((i + 4) < count))
 			rte_prefetch0(mbufs[i + 4]);
 
-		req_id = rxq->empty_rx_reqs[next_to_use_masked];
+		req_id = rxq->empty_rx_reqs[next_to_use];
 		rc = validate_rx_req_id(rxq, req_id);
 		if (unlikely(rc))
 			break;
@@ -1447,7 +1447,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 			break;
 		}
 		rx_info->mbuf = mbuf;
-		next_to_use++;
+		next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, rxq->size_mask);
 	}
 
 	if (unlikely(i < count)) {
@@ -2056,7 +2056,6 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 	struct rte_mbuf *mbuf;
 	struct rte_mbuf *mbuf_head;
 	struct ena_rx_buffer *rx_info;
-	unsigned int ring_mask = rx_ring->ring_size - 1;
 	uint16_t ntc, len, req_id, buf = 0;
 
 	if (unlikely(descs == 0))
@@ -2084,8 +2083,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 	mbuf_head->data_off += offset;
 
 	rx_info->mbuf = NULL;
-	rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
-	++ntc;
+	rx_ring->empty_rx_reqs[ntc] = req_id;
+	ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask);
 
 	while (--descs) {
 		++buf;
@@ -2107,8 +2106,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
 		mbuf_head->pkt_len += len;
 
 		rx_info->mbuf = NULL;
-		rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
-		++ntc;
+		rx_ring->empty_rx_reqs[ntc] = req_id;
+		ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask);
 	}
 
 	*next_to_clean = ntc;
@@ -2120,8 +2119,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts)
 {
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
-	unsigned int ring_size = rx_ring->ring_size;
-	unsigned int ring_mask = ring_size - 1;
 	unsigned int free_queue_entries;
 	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
@@ -2138,7 +2135,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return 0;
 	}
 
-	descs_in_use = ring_size -
+	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
 	nb_pkts = RTE_MIN(descs_in_use, nb_pkts);
 
@@ -2167,9 +2164,10 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 					    ena_rx_ctx.pkt_offset);
 		if (unlikely(mbuf == NULL)) {
 			for (i = 0; i < ena_rx_ctx.descs; ++i) {
-				rx_ring->empty_rx_reqs[next_to_clean & ring_mask] =
+				rx_ring->empty_rx_reqs[next_to_clean] =
 					rx_ring->ena_bufs[i].req_id;
-				++next_to_clean;
+				next_to_clean = ENA_IDX_NEXT_MASKED(
+					next_to_clean, rx_ring->size_mask);
 			}
 			break;
 		}
@@ -2194,7 +2192,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
 	refill_threshold =
-		RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+		RTE_MIN(rx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
 		(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 	/* Burst refill to save doorbells, memory barriers, const interval */
@@ -2337,8 +2335,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t next_to_clean = tx_ring->next_to_clean;
 	struct rte_mbuf *mbuf;
 	uint16_t seg_len;
-	unsigned int ring_size = tx_ring->ring_size;
-	unsigned int ring_mask = ring_size - 1;
 	unsigned int cleanup_budget;
 	struct ena_com_tx_ctx ena_tx_ctx;
 	struct ena_tx_buffer *tx_info;
@@ -2368,7 +2364,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (unlikely(rc))
 			break;
 
-		req_id = tx_ring->empty_tx_reqs[next_to_use & ring_mask];
+		req_id = tx_ring->empty_tx_reqs[next_to_use];
 		tx_info = &tx_ring->tx_buffer_info[req_id];
 		tx_info->mbuf = mbuf;
 		tx_info->num_of_bufs = 0;
@@ -2412,7 +2408,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
 			tx_ring->disable_meta_caching);
 
-		rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]);
+		rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED(
+			sent_idx, 4, tx_ring->size_mask)]);
 
 		/* Process first segment taking into
 		 * consideration pushed header
@@ -2464,7 +2461,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 		tx_info->tx_descs = nb_hw_desc;
 
-		next_to_use++;
+		next_to_use = ENA_IDX_NEXT_MASKED(next_to_use,
+			tx_ring->size_mask);
 		tx_ring->tx_stats.cnt++;
 		tx_ring->tx_stats.bytes += total_length;
 	}
@@ -2495,10 +2493,11 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		tx_info->mbuf = NULL;
 
 		/* Put back descriptor to the ring for reuse */
-		tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id;
-		next_to_clean++;
+		tx_ring->empty_tx_reqs[next_to_clean] = req_id;
+		next_to_clean = ENA_IDX_NEXT_MASKED(next_to_clean,
+			tx_ring->size_mask);
 		cleanup_budget =
-			RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+			RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER,
 			(unsigned int)ENA_REFILL_THRESH_PACKET);
 
 		/* If too many descs to clean, leave it for another run */
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 6634d0134f..db7f013de0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -40,6 +40,9 @@
 #define ENA_REFILL_THRESH_DIVIDER 8
 #define ENA_REFILL_THRESH_PACKET 256
 
+#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask))
+#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))
+
 struct ena_adapter;
 
 enum ena_ring_type {
@@ -109,6 +112,7 @@ struct ena_ring {
 	};
 	struct rte_mbuf **rx_refill_buffer;
 	unsigned int ring_size; /* number of tx/rx_buffer_info's entries */
+	unsigned int size_mask;
 	struct ena_com_io_cq *ena_com_io_cq;
 	struct ena_com_io_sq *ena_com_io_sq;
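
For illustration only (not part of the patch): below is a minimal,
self-contained sketch of the masked-index arithmetic the new macros
implement. The two macros are copied from the diff above; the harness
around them is hypothetical. The trick assumes the ring size is a
power of two, which is what makes size_mask = nb_desc - 1 an all-ones
bit mask, so the bitwise AND behaves like a modulo by the ring size
without the cost of a division.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Macros copied from drivers/net/ena/ena_ethdev.h as added by this patch. */
#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask))
#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))

int main(void)
{
	/* Ring size must be a power of two for the mask to be valid. */
	uint16_t ring_size = 8;
	uint16_t size_mask = ring_size - 1;	/* 0x0007 */
	uint16_t idx = 7;			/* last slot of the ring */

	/* Advancing past the last slot wraps back to slot 0. */
	idx = ENA_IDX_NEXT_MASKED(idx, size_mask);
	assert(idx == 0);

	/* Adding an arbitrary step wraps the same way: (6 + 4) % 8 == 2. */
	idx = ENA_IDX_ADD_MASKED(6, 4, size_mask);
	assert(idx == 2);

	printf("masked index arithmetic OK\n");
	return 0;
}

Keeping the mask in struct ena_ring means it is computed once at queue
setup instead of being rederived from ring_size in every hot-path
function, and the wrap-around is spelled out in one place.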