
[v2,0/8] ethdev: introduce hairpin memory capabilities

Message ID 20221006110105.2986966-1-dsosnowski@nvidia.com (mailing list archive)
Message

Dariusz Sosnowski Oct. 6, 2022, 11 a.m. UTC
  Hairpin queues are used to transmit packets received on the wire back to the wire.
How hairpin queues are implemented and configured is decided internally by the PMD, and
applications have no control over the configuration of Rx and Tx hairpin queues.
This patchset addresses that by:

- Extending hairpin queue capabilities reported by PMDs.
- Exposing new configuration options for Rx and Tx hairpin queues.

The main goal of this patchset is to allow applications to provide configuration hints
regarding memory placement of hairpin queues.
These hints specify whether buffers of hairpin queues should be placed in host memory
or in dedicated device memory.
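
Before requesting a specific placement, an application can first check which placements the PMD supports.
A minimal C sketch, assuming the new per-direction capability bits added by this patchset are exposed as the
rx_cap/tx_cap members of struct rte_eth_hairpin_cap (names as proposed here; verify against the merged headers):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print which hairpin queue memory placements a port reports as supported. */
    static void
    print_hairpin_mem_caps(uint16_t port_id)
    {
        struct rte_eth_hairpin_cap cap;
        int ret = rte_eth_dev_hairpin_capability_get(port_id, &cap);

        if (ret != 0) {
            printf("port %u: hairpin not supported (%d)\n", port_id, ret);
            return;
        }
        /* Assumed names of the new per-queue capability bits. */
        printf("Rx: locked device memory=%u, RTE memory=%u\n",
               cap.rx_cap.locked_device_memory, cap.rx_cap.rte_memory);
        printf("Tx: locked device memory=%u, RTE memory=%u\n",
               cap.tx_cap.locked_device_memory, cap.tx_cap.rte_memory);
    }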

For example, in the context of NVIDIA ConnectX and BlueField devices,
this distinction is important for several reasons:

- By default, data buffers and packet descriptors are placed in a device memory region
  that is shared with other resources (e.g. flow rules).
  This results in memory contention on the device,
  which may lead to degraded performance under heavy load.
- Placing hairpin queues in dedicated device memory can decrease latency of hairpinned traffic,
  since hairpin queue processing will not be memory-starved by other operations.
  A side effect of this memory configuration is that it leaves less memory for other resources,
  possibly causing memory contention in non-hairpin traffic.
- Placing hairpin queues in host memory can increase throughput of hairpinned
  traffic at the cost of increased latency.
  Each packet processed by hairpin queues will incur additional PCI transactions (increasing latency),
  but memory contention on the device is avoided.

Since the preferred trade-off between throughput and latency depends on the workload,
it would be beneficial if developers could choose the hairpin configuration best suited to their use case.

To address that, this patchset adds the following configuration options (in the rte_eth_hairpin_conf struct); a usage sketch follows the list:

- use_locked_device_memory - If set, the PMD will allocate specialized on-device memory for the queue.
- use_rte_memory - If set, the PMD will use DPDK-managed memory for the queue.
- force_memory - If set, the PMD is forced to use the provided memory configuration.
  If no appropriate resources are available, the queue allocation will fail.
  If unset and no appropriate resources are available, the PMD will fall back to its default behavior.
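
As referenced above, here is a minimal C sketch of how an application might request locked device memory
for a hairpin Rx queue. rte_eth_rx_hairpin_queue_setup() and struct rte_eth_hairpin_conf are the existing
ethdev API; the helper itself and the chosen flag combination are illustrative only:

    #include <rte_ethdev.h>

    /* Illustrative helper: set up hairpin Rx queue `rxq` peered with Tx queue
     * `peer_txq` on the same port, requesting locked device memory and failing
     * (instead of falling back) if the PMD cannot honor the request. */
    static int
    setup_rx_hairpin_locked(uint16_t port_id, uint16_t rxq, uint16_t peer_txq,
                            uint16_t nb_desc)
    {
        struct rte_eth_hairpin_conf conf = {
            .peer_count = 1,
            .use_locked_device_memory = 1, /* hint: dedicated device memory */
            .force_memory = 1,             /* do not fall back silently */
        };

        conf.peers[0].port = port_id;
        conf.peers[0].queue = peer_txq;

        /* Returns a negative errno if the requested memory configuration
         * cannot be provided while force_memory is set. */
        return rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
    }

Clearing force_memory would turn the request into a hint and let the PMD fall back to its default
memory configuration instead of failing.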

Implementing support for these flags is optional, and applications should be allowed to leave all of these new flags unset.
Doing so results in the default memory configuration provided by the PMD.
Application developers should consult the PMD documentation in that case.

These changes were originally proposed in http://patches.dpdk.org/project/dpdk/patch/20220811120530.191683-1-dsosnowski@nvidia.com/.

Dariusz Sosnowski (8):
  ethdev: introduce hairpin memory capabilities
  common/mlx5: add hairpin SQ buffer type capabilities
  common/mlx5: add hairpin RQ buffer type capabilities
  net/mlx5: allow hairpin Tx queue in RTE memory
  net/mlx5: allow hairpin Rx queue in locked memory
  doc: add notes for hairpin to mlx5 documentation
  app/testpmd: add hairpin queues memory modes
  app/flow-perf: add hairpin queue memory config

 app/test-flow-perf/main.c              |  32 +++++
 app/test-pmd/parameters.c              |   2 +-
 app/test-pmd/testpmd.c                 |  24 +++-
 app/test-pmd/testpmd.h                 |   2 +-
 doc/guides/nics/mlx5.rst               |  37 ++++++
 doc/guides/platform/mlx5.rst           |   5 +
 doc/guides/rel_notes/release_22_11.rst |  10 ++
 doc/guides/testpmd_app_ug/run_app.rst  |  10 +-
 drivers/common/mlx5/mlx5_devx_cmds.c   |   8 ++
 drivers/common/mlx5/mlx5_devx_cmds.h   |   5 +
 drivers/common/mlx5/mlx5_prm.h         |  25 +++-
 drivers/net/mlx5/mlx5.h                |   2 +
 drivers/net/mlx5/mlx5_devx.c           | 170 ++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_ethdev.c         |   6 +
 lib/ethdev/rte_ethdev.c                |  44 +++++++
 lib/ethdev/rte_ethdev.h                |  68 +++++++++-
 16 files changed, 422 insertions(+), 28 deletions(-)
  

Comments

Thomas Monjalon Oct. 8, 2022, 4:31 p.m. UTC | #1
06/10/2022 13:00, Dariusz Sosnowski:
> [...]
> Dariusz Sosnowski (8):
>   ethdev: introduce hairpin memory capabilities
>   common/mlx5: add hairpin SQ buffer type capabilities
>   common/mlx5: add hairpin RQ buffer type capabilities
>   net/mlx5: allow hairpin Tx queue in RTE memory
>   net/mlx5: allow hairpin Rx queue in locked memory
>   doc: add notes for hairpin to mlx5 documentation
>   app/testpmd: add hairpin queues memory modes
>   app/flow-perf: add hairpin queue memory config

Doc squashed in mlx5 commits.
Applied, thanks.