From patchwork Fri Dec 22 21:56:48 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135537
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Somnath Kotur
Subject: [PATCH v2 07/18] net/bnxt: reattempt mbuf allocation for Rx and AGG rings
Date: Fri, 22 Dec 2023 13:56:48 -0800
Message-Id: <20231222215659.64993-8-ajit.khaparde@broadcom.com>
In-Reply-To: <20231222215659.64993-1-ajit.khaparde@broadcom.com>
References: <20231222215659.64993-1-ajit.khaparde@broadcom.com>

Normally the PMD allocates a new mbuf for every mbuf
consumed. If the allocation fails, the corresponding slot in the Rx or
AGG ring remains empty until a new mbuf is allocated for it. If this
happens too frequently, the Rx ring or the aggregation ring can be
completely drained of mbufs, causing unexpected behavior.

To prevent this, on an mbuf allocation failure set a bit and reattempt
mbuf allocation later to fill the empty slots. Since this should not
happen under normal circumstances, it should not impact regular Rx
performance.

The need_realloc bit is set in the RxQ if mbuf allocation fails for the
Rx ring or the AGG ring. As long as the application keeps calling the
Rx burst function even when the Rx rings have become completely empty,
this logic can reattempt buffer allocation for the associated Rx and
aggregation rings.

Signed-off-by: Ajit Khaparde
Reviewed-by: Somnath Kotur
---
 drivers/net/bnxt/bnxt_rxq.h |   1 +
 drivers/net/bnxt/bnxt_rxr.c | 101 ++++++++++++++++++++++--------------
 2 files changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index b9908be5f4..77bc382a1d 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -41,6 +41,7 @@ struct bnxt_rx_queue {
 	struct bnxt_cp_ring_info	*cp_ring;
 	struct rte_mbuf			fake_mbuf;
 	uint64_t			rx_mbuf_alloc_fail;
+	uint8_t				need_realloc;
 	const struct rte_memzone *mz;
 };
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b919922a64..c5c9f9e6e6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -50,6 +50,8 @@ static inline int bnxt_alloc_rx_data(struct bnxt_rx_queue *rxq,
 	mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
 	if (!mbuf) {
 		__atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, __ATOMIC_RELAXED);
+		/* If buff has failed already, setting this again won't hurt */
+		rxq->need_realloc = 1;
 		return -ENOMEM;
 	}
 
@@ -85,6 +87,8 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
 	mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
 	if (!mbuf) {
 		__atomic_fetch_add(&rxq->rx_mbuf_alloc_fail, 1, __ATOMIC_RELAXED);
+		/* If buff has failed already, setting this again won't hurt */
+		rxq->need_realloc = 1;
 		return -ENOMEM;
 	}
 
@@ -139,7 +143,6 @@ static void bnxt_rx_ring_reset(void *arg)
 	int i, rc = 0;
 	struct bnxt_rx_queue *rxq;
 
-
 	for (i = 0; i < (int)bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr;
 
@@ -357,7 +360,8 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 		RTE_ASSERT(ag_cons <= rxr->ag_ring_struct->ring_mask);
 		ag_buf = &rxr->ag_buf_ring[ag_cons];
 		ag_mbuf = *ag_buf;
-		RTE_ASSERT(ag_mbuf != NULL);
+		if (ag_mbuf == NULL)
+			return -EBUSY;
 
 		ag_mbuf->data_len = rte_le_to_cpu_16(rxcmp->len);
 
@@ -452,7 +456,7 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 	RTE_ASSERT(mbuf != NULL);
 
 	if (agg_bufs) {
-		bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs, tpa_info);
+		(void)bnxt_rx_pages(rxq, mbuf, raw_cp_cons, agg_bufs, tpa_info);
 	}
 	mbuf->l4_len = payload_offset;
 
@@ -1230,8 +1234,11 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	bnxt_set_mark_in_mbuf(rxq->bp, rxcmp1, mbuf);
 
 reuse_rx_mbuf:
-	if (agg_buf)
-		bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf, NULL);
+	if (agg_buf) {
+		rc = bnxt_rx_pages(rxq, mbuf, &tmp_raw_cons, agg_buf, NULL);
+		if (rc != 0)
+			return -EBUSY;
+	}
 
 #ifdef BNXT_DEBUG
 	if (rxcmp1->errors_v2 & RX_CMP_L2_ERRORS) {
@@ -1293,6 +1300,48 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	return rc;
 }
 
+static void bnxt_reattempt_buffer_alloc(struct bnxt_rx_queue *rxq)
+{
+	struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+	struct bnxt_ring *ring;
+	uint16_t raw_prod;
+	uint32_t cnt;
+
+	/* Assume alloc passes. On failure,
+	 * need_realloc will be set inside bnxt_alloc_XY_data.
+	 */
+	rxq->need_realloc = 0;
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		goto alloc_rx;
+
+	raw_prod = rxr->ag_raw_prod;
+	bnxt_prod_ag_mbuf(rxq);
+	if (raw_prod != rxr->ag_raw_prod)
+		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
+
+alloc_rx:
+	raw_prod = rxr->rx_raw_prod;
+	ring = rxr->rx_ring_struct;
+	for (cnt = 0; cnt < ring->ring_size; cnt++) {
+		struct rte_mbuf **rx_buf;
+		uint16_t ndx;
+
+		ndx = RING_IDX(ring, raw_prod + cnt);
+		rx_buf = &rxr->rx_buf_ring[ndx];
+
+		/* Buffer already allocated for this index. */
+		if (*rx_buf != NULL && *rx_buf != &rxq->fake_mbuf)
+			continue;
+
+		/* This slot is empty. Alloc buffer for Rx */
+		if (bnxt_alloc_rx_data(rxq, rxr, raw_prod + cnt))
+			break;
+
+		rxr->rx_raw_prod = raw_prod + cnt;
+		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
+	}
+}
+
 uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts)
 {
@@ -1302,7 +1351,6 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	uint16_t rx_raw_prod = rxr->rx_raw_prod;
 	uint16_t ag_raw_prod = rxr->ag_raw_prod;
 	uint32_t raw_cons = cpr->cp_raw_cons;
-	bool alloc_failed = false;
 	uint32_t cons;
 	int nb_rx_pkts = 0;
 	int nb_rep_rx_pkts = 0;
@@ -1358,10 +1406,8 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				break;
 			else if (rc == -ENODEV)	/* completion for representor */
 				nb_rep_rx_pkts++;
-			else if (rc == -ENOMEM) {
+			else if (rc == -ENOMEM)
 				nb_rx_pkts++;
-				alloc_failed = true;
-			}
 		} else if (!BNXT_NUM_ASYNC_CPR(rxq->bp)) {
 			evt =
 			bnxt_event_hwrm_resp_handler(rxq->bp,
@@ -1372,7 +1418,12 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		}
 		raw_cons = NEXT_RAW_CMP(raw_cons);
-		if (nb_rx_pkts == nb_pkts || nb_rep_rx_pkts == nb_pkts || evt)
+		/*
+		 * The HW reposting may fall behind if mbuf allocation has
+		 * failed. Break and reattempt allocation to prevent that.
+		 */
+		if (nb_rx_pkts == nb_pkts || nb_rep_rx_pkts == nb_pkts || evt ||
+		    rxq->need_realloc != 0)
 			break;
 	}
 
@@ -1395,35 +1446,9 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	/* Ring the AGG ring DB */
 	if (ag_raw_prod != rxr->ag_raw_prod)
 		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
-
-	/* Attempt to alloc Rx buf in case of a previous allocation failure. */
-	if (alloc_failed) {
-		int cnt;
-
-		rx_raw_prod = RING_NEXT(rx_raw_prod);
-		for (cnt = 0; cnt < nb_rx_pkts + nb_rep_rx_pkts; cnt++) {
-			struct rte_mbuf **rx_buf;
-			uint16_t ndx;
-
-			ndx = RING_IDX(rxr->rx_ring_struct, rx_raw_prod + cnt);
-			rx_buf = &rxr->rx_buf_ring[ndx];
-
-			/* Buffer already allocated for this index. */
-			if (*rx_buf != NULL && *rx_buf != &rxq->fake_mbuf)
-				continue;
-
-			/* This slot is empty. Alloc buffer for Rx */
-			if (!bnxt_alloc_rx_data(rxq, rxr, rx_raw_prod + cnt)) {
-				rxr->rx_raw_prod = rx_raw_prod + cnt;
-				bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
-			} else {
-				PMD_DRV_LOG(ERR, "Alloc mbuf failed\n");
-				break;
-			}
-		}
-	}
-
 done:
+	if (unlikely(rxq->need_realloc))
+		bnxt_reattempt_buffer_alloc(rxq);
 	return nb_rx_pkts;
 }
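
For illustration only (not part of the patch): a minimal sketch of the
application-side polling loop that the commit message assumes, where
rte_eth_rx_burst() is called on every iteration even when the previous
call returned no packets, so the PMD's need_realloc path gets a chance
to refill empty Rx/AGG ring slots once the mempool has free mbufs
again. The port/queue IDs, burst size, and processing placeholder are
hypothetical values chosen for the example.

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SZ 32 /* illustrative burst size */

/* Poll one Rx queue. Keep calling rte_eth_rx_burst() even after an
 * empty burst: with this patch the bnxt PMD uses such calls to
 * reattempt mbuf allocation for the Rx and AGG rings.
 */
static void rx_poll_loop(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[BURST_SZ];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SZ);
		for (i = 0; i < nb_rx; i++) {
			/* application processing would go here */
			rte_pktmbuf_free(pkts[i]);
		}
		/* Do not gate the next poll on nb_rx > 0; an empty burst
		 * may simply mean earlier mbuf allocation failures left
		 * ring slots unfilled.
		 */
	}
}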