From patchwork Sat Nov 21 03:42:39 2020
X-Patchwork-Submitter: Alexander Kozyrev
X-Patchwork-Id: 84445
X-Patchwork-Delegate: rasland@nvidia.com
From: Alexander Kozyrev
To: dev@dpdk.org
Cc: rasland@nvidia.com, viacheslavo@nvidia.com, matan@nvidia.com
Date: Sat, 21 Nov 2020 03:42:39 +0000
Message-Id: <20201121034239.13404-1-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.24.1
Subject: [dpdk-dev] [PATCH] net/mlx5: fix mbufs overflow in vectorized MPRQ

Changing the allocation scheme to improve mbuf locality caused mbuf
overruns in some cases. Revert to the previous replenish logic:
calculate the number of unused mbufs and replenish at most that many
mbufs. Mark the last 4 mbufs as fake mbufs to prevent overflowing into
consumed mbufs in the future; keep the consumed index and the produced
index 4 mbufs apart for this purpose.
Replenish mbufs only when the consumed index is within the replenish
threshold of the produced index, in order to retain cache locality for
the vectorized MPRQ routine.

Fixes: 5c68764377 ("net/mlx5: improve vectorized MPRQ descriptors locality")

Signed-off-by: Alexander Kozyrev
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_rxtx_vec.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 68c51dce31..028e0f6121 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -145,22 +145,29 @@ mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 	const uint32_t strd_n = 1 << rxq->strd_num_n;
 	const uint32_t elts_n = wqe_n * strd_n;
 	const uint32_t wqe_mask = elts_n - 1;
-	uint32_t n = rxq->elts_ci - rxq->rq_pi;
+	uint32_t n = elts_n - (rxq->elts_ci - rxq->rq_pi);
 	uint32_t elts_idx = rxq->elts_ci & wqe_mask;
 	struct rte_mbuf **elts = &(*rxq->elts)[elts_idx];
+	unsigned int i;
 
-	if (n <= rxq->rq_repl_thresh) {
-		MLX5_ASSERT(n + MLX5_VPMD_RX_MAX_BURST >=
-			    MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n));
+	if (n >= rxq->rq_repl_thresh &&
+	    rxq->elts_ci - rxq->rq_pi <= rxq->rq_repl_thresh) {
+		MLX5_ASSERT(n >= MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n));
 		MLX5_ASSERT(MLX5_VPMD_RXQ_RPLNSH_THRESH(elts_n) >
 			    MLX5_VPMD_DESCS_PER_LOOP);
 		/* Not to cross queue end. */
-		n = RTE_MIN(n + MLX5_VPMD_RX_MAX_BURST, elts_n - elts_idx);
+		n = RTE_MIN(n - MLX5_VPMD_DESCS_PER_LOOP, elts_n - elts_idx);
+		/* Limit replenish number to threshold value. */
+		n = RTE_MIN(n, rxq->rq_repl_thresh);
 		if (rte_mempool_get_bulk(rxq->mp, (void *)elts, n) < 0) {
 			rxq->stats.rx_nombuf += n;
 			return;
 		}
 		rxq->elts_ci += n;
+		/* Prevent overflowing into consumed mbufs. */
+		elts_idx = rxq->elts_ci & wqe_mask;
+		for (i = 0; i < MLX5_VPMD_DESCS_PER_LOOP; ++i)
+			(*rxq->elts)[elts_idx + i] = &rxq->fake_mbuf;
 	}
 }