net/mlx5: fix minimum number of Multi-Packet RQ buffers

Message ID 20180802210007.10671-1-yskoh@mellanox.com (mailing list archive)
State Accepted, archived
Delegated to: Shahaf Shuler
Series net/mlx5: fix minimum number of Multi-Packet RQ buffers

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Yongseok Koh Aug. 2, 2018, 9 p.m. UTC
  If MPRQ is enabled, a PMD-private mempool is allocated. For ConnectX-4 Lx,
the minimum number of strides per buffer is 512, whereas ConnectX-5 supports
as few as 8. This results in quite a small number of elements for the MPRQ
mempool. For example, if the size of the Rx ring is configured as 512, a
single MPRQ buffer can cover the whole ring. If only one Rx queue is
configured, then in the following code in mlx5_mprq_alloc_mp(), desc is 1 and
obj_num will be 36 as a result.

	desc *= 4;
	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * priv->rxqs_n;

However, rte_mempool_create_empty() has a sanity check that refuses a
per-lcore cache size that is too large compared to the number of elements:
the cache flush threshold must not exceed the number of elements of the
mempool. For the above example, the threshold is 32 * 1.5 = 48, which is
larger than 36, so creating the mempool fails.
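
For reference, the check in rte_mempool_create_empty() is roughly as follows
(paraphrased from lib/librte_mempool/rte_mempool.c; the exact code may differ
between DPDK releases). Note that CACHE_FLUSHTHRESH_MULTIPLIER is defined in
that C file rather than in a public header, which is why the fix below
approximates it with a constant 2:

	/* Internal to lib/librte_mempool/rte_mempool.c, not exported: */
	#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
	#define CALC_CACHE_FLUSHTHRESH(c)	\
		((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

	/* In rte_mempool_create_empty(): reject a cache too big for n elements. */
	if (cache_size > RTE_MEMPOOL_CACHE_MAX_SIZE ||
	    CALC_CACHE_FLUSHTHRESH(cache_size) > n) {
		rte_errno = EINVAL;
		return NULL;
	}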

Fixes: 7d6bf6b866b8 ("net/mlx5: add Multi-Packet Rx support")
Cc: stable@dpdk.org

Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
 drivers/net/mlx5/mlx5_defs.h | 2 +-
 drivers/net/mlx5/mlx5_rxq.c  | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)
  

Comments

Shahaf Shuler Aug. 5, 2018, 11:33 a.m. UTC | #1
Friday, August 3, 2018 12:00 AM, Yongseok Koh:
> Subject: [PATCH] net/mlx5: fix minimum number of Multi-Packet RQ buffers
> 
> [...]
> 
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>

Applied to next-net-mlx, thanks.
  

Patch

diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 439cc159f6..f2a1679511 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -126,7 +126,7 @@ 
 #define MLX5_MPRQ_MIN_RXQS 12
 
 /* Cache size of mempool for Multi-Packet RQ. */
-#define MLX5_MPRQ_MP_CACHE_SZ 32
+#define MLX5_MPRQ_MP_CACHE_SZ 32U
 
 /* Definition of static_assert found in /usr/include/assert.h */
 #ifndef HAVE_STATIC_ASSERT
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 97b3e8ef0c..f785cdcebf 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1234,6 +1234,13 @@  mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	 */
 	desc *= 4;
 	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * priv->rxqs_n;
+	/*
+	 * rte_mempool_create_empty() has sanity check to refuse large cache
+	 * size compared to the number of elements.
+	 * CACHE_FLUSHTHRESH_MULTIPLIER is defined in a C file, so using a
+	 * constant number 2 instead.
+	 */
+	obj_num = RTE_MAX(obj_num, MLX5_MPRQ_MP_CACHE_SZ * 2);
 	/* Check a mempool is already allocated and if it can be reused. */
 	if (mp != NULL && mp->elt_size >= obj_size && mp->size >= obj_num) {
 		DRV_LOG(DEBUG, "port %u mempool %s is being reused",
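
As a quick sanity check of the fix, here is a worked example based on the
scenario from the commit message (the local variables below are illustrative,
not the driver's actual code). The constant is also changed to 32U, presumably
so that RTE_MAX(), which compares its operands via typeof, does not mix signed
and unsigned types:

	/* One Rx queue; a single MPRQ buffer covers the 512-entry Rx ring. */
	unsigned int rxqs_n = 1;
	unsigned int desc = 1;
	unsigned int obj_num;

	desc *= 4;                                               /* desc = 4 */
	obj_num = desc + MLX5_MPRQ_MP_CACHE_SZ * rxqs_n;         /* 4 + 32 = 36 */
	obj_num = RTE_MAX(obj_num, MLX5_MPRQ_MP_CACHE_SZ * 2);   /* max(36, 64) = 64 */
	/*
	 * The cache flush threshold is 32 * 1.5 = 48, which no longer exceeds
	 * obj_num, so rte_mempool_create_empty() accepts the mempool.
	 */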