net/mlx5: fix packet padding config for RxQ via DevX

Message ID 20201115142534.31383-1-akozyrev@nvidia.com (mailing list archive)
State Accepted, archived
Delegated to: Raslan Darawsheh
Series: net/mlx5: fix packet padding config for RxQ via DevX

Checks

Context                      Check    Description
ci/checkpatch                success  coding style OK
ci/Intel-compilation         success  Compilation OK
ci/travis-robot              success  Travis build: passed
ci/iol-broadcom-Functional   success  Functional Testing PASS
ci/iol-broadcom-Performance  success  Performance Testing PASS
ci/iol-testing               success  Testing PASS
ci/iol-intel-Functional      fail     Functional Testing issues
ci/iol-intel-Performance     success  Performance Testing PASS
ci/iol-mellanox-Performance  success  Performance Testing PASS

Commit Message

Alexander Kozyrev Nov. 15, 2020, 2:25 p.m. UTC
Received packets can be aligned to the cache-line size on
PCI transactions. This can improve performance by avoiding
partial cache-line writes, at the cost of increased PCI bandwidth.

This feature is supposed to be controlled by the rxq_pkt_pad_en
devarg, and that is the case for an RxQ created via the Verbs API.
In the DevX API case, however, it is erroneously controlled by the
rxq_cqe_pad_en devarg, which governs CQE padding and should not
affect RxQ creation.

Fix DevX RxQ creation by using the proper configuration flag for
Rx packet padding, the one set by the rxq_pkt_pad_en devarg.

Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
Cc: stable@dpdk.org

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/mlx5_devx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  
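For illustration, here is a minimal, self-contained sketch (not the actual mlx5 driver code) of the relationship this patch restores: the rxq_pkt_pad_en devarg is reflected in a hw_padding-style flag, and that flag, rather than the CQE padding one, selects the DevX WQ end-padding mode. All structure, enum, and function names below are simplified stand-ins; only the hw_padding/end_padding_mode relationship mirrors the diff shown further down.

#include <stdio.h>
#include <string.h>

/* Stand-ins for the real DevX WQ end-padding modes. */
enum { WQ_END_PAD_MODE_NONE = 0, WQ_END_PAD_MODE_ALIGN = 1 };

struct example_config {
	int hw_padding; /* set from the rxq_pkt_pad_en devarg */
	int cqe_pad;    /* set from the rxq_cqe_pad_en devarg (CQE padding only) */
};

struct example_wq_attr {
	int end_padding_mode;
};

/* Parse a single "key=value"-style devarg into the config (simplified). */
static void
example_parse_devarg(struct example_config *cfg, const char *key, long val)
{
	if (strcmp(key, "rxq_pkt_pad_en") == 0)
		cfg->hw_padding = !!val;
	else if (strcmp(key, "rxq_cqe_pad_en") == 0)
		cfg->cqe_pad = !!val;
}

/* After the fix: the WQ end-padding mode follows hw_padding, not cqe_pad. */
static void
example_fill_wq_attr(const struct example_config *cfg,
		     struct example_wq_attr *wq_attr)
{
	wq_attr->end_padding_mode = cfg->hw_padding ?
				    WQ_END_PAD_MODE_ALIGN :
				    WQ_END_PAD_MODE_NONE;
}

int
main(void)
{
	struct example_config cfg = { 0, 0 };
	struct example_wq_attr wq = { 0 };

	example_parse_devarg(&cfg, "rxq_pkt_pad_en", 1);
	example_fill_wq_attr(&cfg, &wq);
	printf("end_padding_mode=%d\n", wq.end_padding_mode); /* prints 1 */
	return 0;
}

In the real driver the padding is requested at device probe time through devargs, e.g. by appending rxq_pkt_pad_en=1 to the port's device arguments.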

Comments

Raslan Darawsheh Nov. 17, 2020, 10:52 a.m. UTC | #1
Hi,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Sunday, November 15, 2020 4:26 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>; Dekel Peled
> <dekelp@nvidia.com>; Matan Azrad <matan@nvidia.com>
> Subject: [PATCH] net/mlx5: fix packet padding config for RxQ via DevX
> 
> Received packets can be aligned to the cache-line size on
> PCI transactions. This can improve performance by avoiding
> partial cache-line writes, at the cost of increased PCI bandwidth.
> 
> This feature is supposed to be controlled by the rxq_pkt_pad_en
> devarg, and that is the case for an RxQ created via the Verbs API.
> In the DevX API case, however, it is erroneously controlled by the
> rxq_cqe_pad_en devarg, which governs CQE padding and should not
> affect RxQ creation.
> 
> Fix DevX RxQ creation by using the proper configuration flag for
> Rx packet padding, the one set by the rxq_pkt_pad_en devarg.
> 
> Fixes: dc9ceff73c ("net/mlx5: create advanced RxQ via DevX")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
  

Patch

diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index e9ceda5caf..34044fcb0c 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -294,7 +294,7 @@  static void
 mlx5_devx_wq_attr_fill(struct mlx5_priv *priv, struct mlx5_rxq_ctrl *rxq_ctrl,
 		       struct mlx5_devx_wq_attr *wq_attr)
 {
-	wq_attr->end_padding_mode = priv->config.cqe_pad ?
+	wq_attr->end_padding_mode = priv->config.hw_padding ?
 					MLX5_WQ_END_PAD_MODE_ALIGN :
 					MLX5_WQ_END_PAD_MODE_NONE;
 	wq_attr->pd = priv->sh->pdn;