diff mbox series

net/mlx5: fix external buffer pool registration for Rx queue

Message ID 20210212110630.2605-1-viacheslavo@nvidia.com (mailing list archive)
State Accepted
Delegated to: Raslan Darawsheh
Headers show
Series net/mlx5: fix external buffer pool registration for Rx queue | expand

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-testing success Testing PASS
ci/iol-abi-testing success Testing PASS
ci/intel-Testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-broadcom-Functional fail Functional Testing issues

Commit Message

Slava Ovsiienko Feb. 12, 2021, 11:06 a.m. UTC
On Rx queue creation the mlx5 PMD registers the data buffers of the
specified pools for DMA operations. It scans the mem_list of the pools
and creates MRs (the NIC objects backing DMA) for the chunks found.
If a pool is created with rte_pktmbuf_pool_create_extbuf() and refers
to attached external buffers (which are the application's
responsibility, and which the application must explicitly register
for DMA with the rte_dev_dma_map() call), the chunks contain only the
mbuf structures, without any built-in data buffers. Hence, the mlx5
NIC never performs DMA to this area and there is no need to create
MRs for it.

The extra, unneeded MRs created for pools with external buffers loaded
the MR cache and slightly affected performance. The patch checks the
mbuf pool type and skips MR creation for pools with external buffers.

Fixes: bdb8e5b1ea7b ("net/mlx5: allow allocated mbuf with external buffer")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_mr.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

Comments

Matan Azrad Feb. 14, 2021, 10:42 a.m. UTC | #1
From: Viacheslav Ovsiienko
> [...]
Acked-by: Matan Azrad <matan@nvidia.com>

Good catch!
Raslan Darawsheh Feb. 21, 2021, 8:14 a.m. UTC | #2
Hi,

> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Sent: Friday, February 12, 2021 1:07 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5: fix external buffer pool registration for Rx queue
> [...]

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

Patch

diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 8b20ee3f83..da4e91fc24 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -535,7 +535,18 @@  mlx5_mr_update_mp(struct rte_eth_dev *dev, struct mlx5_mr_ctrl *mr_ctrl,
 		.mr_ctrl = mr_ctrl,
 		.ret = 0,
 	};
+	uint32_t flags = rte_pktmbuf_priv_flags(mp);
 
+	if (flags & RTE_PKTMBUF_POOL_F_PINNED_EXT_BUF) {
+		/*
+		 * The pinned external buffer should be registered for DMA
+		 * operations by the application. The mem_list of the pool
+		 * contains only chunks with mbuf structures and no built-in
+		 * data buffers; DMA does not actually happen there, so there
+		 * is no need to create MRs for these chunks.
+		 */
+		return 0;
+	}
 	DRV_LOG(DEBUG, "Port %u Rx queue registering mp %s "
 		       "having %u chunks.", dev->data->port_id,
 		       mp->name, mp->nb_mem_chunks);